Dan Seed (00:00):
Hello and welcome to Big Ideas, a podcast from Texas State University. I'm your host, Dan Seed from the School of Journalism and Mass Communication. This month we're diving into the world of artificial intelligence and deepfakes. Deepfakes are especially relevant with the presidential election around the corner, where bad actors could use the technology to try to influence the election and spread disinformation by creating content that purports to show a candidate saying or doing something that they didn't really do. In the interest of full disclosure, what you're listening to now is an artificial intelligence voice. It's my voice, but I'm not actually saying this right now. I uploaded my voice into a program called ElevenLabs and then entered a script, which it uses to produce my voice reading the intro. Pretty remarkable stuff. Right now it's me, the real me, speaking, and we're joined this month by my friend and colleague Dale Blasingame from the School of Journalism and Mass Communication. Dale is an associate professor of practice who is an Emmy Award-winning television journalist and now teaches all the cool, fun things in our digital media innovation program. He's the 2017 recipient of the Presidential Excellence Award for Teaching and the 2023 recipient of the Presidential Excellence Award for Service. Congratulations on that, Dale, and welcome to the show. Thanks for being here.
Dale Blasingame (01:23):
Thanks for having me back.
Dan Seed (01:25):
Yeah, you're a two-time guest. Last time we had you, we were talking about your Study in America program. This is clearly a lot different, a lot different tone, a lot different topic here, but it's something that you're deeply knowledgeable about. AI, deepfakes, all that stuff is in the news. For our audience out there, I think AI is something that people are pretty aware of right now, but give us the run-through in terms of what AI is and how it's used day to day. I think there are things that people use without realizing, oh, that's AI.
Dale Blasingame (01:55):
Yeah, in fact, we've been using AI in our daily lives for many, many years now. Some of the more common uses of AI, I guess, would be GPS determining what's the best route to take and then recalculating once problems pop up on a route. Bank fraud and credit card fraud security: when your bank account or your credit card account realizes that there's an unusual pattern, or something unusual happens amongst the pattern of your spending, they send you an alert. Face ID on your phone is AI. So a lot of these everyday common things that are just absolutely built into the fabric of how we live our lives now are AI, without blasting to the world that they're AI on a daily basis.
Dan Seed (02:42):
Now, AI is coming to the forefront of discussion, whether it's in commerce or education, and a lot of the tone around it feels very negative, kind of scary in a sense. Talk about that a little bit, this idea that we have all these things that do good, and now we're kind of moving into this, oh my God, the robots are coming.
Dale Blasingame (03:05):
There are two sides of the coin on this equation. AI has made our lives a lot better, a lot more efficient, and I think still has a lot of potential to do that. But there are obviously some things to worry about. I'm still not totally sold on self-driving cars. That would be amazing if we can get that worked out. But if a Waymo pulled up to pick me up in downtown Austin, I would probably still be kind of concerned about that happening, despite it probably being much safer than me driving a car, or someone else driving a car and texting and being distracted and all of that. So I think a lot of it's just fear of the unknown, which tends to happen a lot with new technologies. A lot of the same things were said about social media, and you can argue that a lot of those things came true. So there is reason for concern many times with a lot of these, but I also try to keep my mind wrapped around what the potential positive uses are as well, without just solely focusing on the fear and concern that comes along with them.
Dan Seed (04:09):
Well, the social media point is really interesting, because social media, as we've found, is wonderful at times in the way that it connects us, but we can overload ourselves. Is this something that you view as best used in moderation?
Dale Blasingame (04:25):
I think it's all about how it's being used. I don't know if moderation is going to play as big a role here, because like I said, we're already using AI so much in our lives without really knowing it or realizing that it's AI. So it's really in what you choose to do with it. I'll use our students as a perfect example. I talk with them a lot about the role of AI in both their education and their future professional careers. I always point to what we now think of as traditional AI, the most common tools like ChatGPT or the other large language models: they should be a good teammate. They shouldn't be the team. If they're the team, then why are you serving a purpose, if the machine is doing everything for you? And so especially in our field, Dan, mass communication,
(05:17):
Whether it's PR, advertising, journalism, digital creation, it's being impacted by AI tools like this. But again, the human still plays an incredibly important role in that equation. And I always make sure, if I am in the process of creating content or creating an idea or a game plan for something, creating a lecture, creating my slides, I always have my own rough idea of what I want to do. And then I plug that into the AI, just like I might ask you, hey, I'm doing a lecture on video storytelling, which is one of your strong suits. What am I missing here? What else should I cover? What things am I overlooking, or what am I doing too much of? What should I dial back on in order to save time? And I think students can take that same approach and apply it in a smart way to their work.
(06:10):
So that's always how I think about large language models and things like that. I treat them as a good teammate, not as the team, because I don't want myself to be replaceable. There's that constant fear you hear: well, AI is going to take everyone's jobs. Well, not if you don't let it. You prove that you're a valuable person to whatever employer or business or brand that you're working with. And so I think it's still really important to not go overboard on it and not rely on it too much, while also recognizing it can be incredibly helpful, can streamline your processes, and can help you realize things that you didn't. Like I said, I run every lecture, I run all my slides through ChatGPT or something very similar, and in several instances it's brought up things where I was like, oh man, I didn't even think to cover that. Or it'll say, hey, in addition to this, you should also look at this, and connect some dots that I maybe wasn't doing the best job of connecting. So I think it's making me a better teacher, which is the whole point of it. It's not here to be the teacher. It's here to make me a better teacher, hopefully, and help students learn. So I hope people always keep that in mind when they're thinking about AI or potential uses of it.
Dan Seed (07:28):
Yeah, I couldn't agree more with you on that, especially now when you go on Google and search something, and the first part is the Google AI that comes up with answers for you, but there are links with it, so you're not just getting random stuff. It's like, well, let me look at that a little deeper. I use that quite a bit in the media management class that I teach, where we're talking about the business side of the media business. Previously, if I wanted to know how much revenue a television company made, I might have to dig through a quarterly report or a year-end report to find that. Now it just shows up there, and I can look at it, click the link, go in, make sure everything is kosher with it, and include it in the lecture. And at the first part of this episode, the intro that people heard was AI, where I uploaded my voice to the ElevenLabs program. Then I write the script, I put it in, and it spits out my voice. It's something that's really helped me with my recorded lectures in saving time, because it will spit that out in seconds, and then it's just a matter of me adding my slides to it. When you're talking with your students or doing stuff with your students, what are some ways that you're incorporating it into the classroom? Because this isn't something we can be afraid of, we've got to introduce them to this, but there's that delicate balance as well, like you mentioned.
Dale Blasingame (08:52):
Yeah, and on that note, really quickly before I answer that question: one of the best use cases I've heard, I gave a talk about deepfakes in New Braunfels a couple months ago, and someone in the back of the room was an author, and he said, I now handle the audiobooks of my books in about five minutes, whereas in the past it took me weeks to record, and I was only recording in one language. He said, now I can record my books in every language in the world within a matter of minutes. So that's another really useful case of AI technology. As for the things I'm doing, I don't want to make it seem like I'm doing anything earth-shattering in the classroom, but I do think it's important for students to be AI literate, because their workplaces are going to, in many cases, or in most cases, expect them to have some level of fluency when it comes to these AI tools that we now think of every day.
(09:47):
We're getting ready this week in my social media class to do a writing workshop where the first half of it is we write on our own, and then the second half is we have AI write for us, and then we compare and contrast: in which ways were we better than the AI, were there any ways where the AI was better, and what did the AI do wrong, because it just doesn't know our client as well as we do as human beings? And that's always kind of an eye-opening experience for the students, because you do hear rumblings, especially in the social media marketing world, that a lot of, I don't want to say a lot, but some people have turned over their content creation completely to a tool like ChatGPT. And that really frightens me, because the models are good and they're always going to be getting better.
(10:29):
One thing I say a million times is: this is the worst the technology is ever going to be. It's only going to get better from here. It's never going to go back in terms of efficiency and accuracy, you would hope. I just don't think we're there yet. And that, again, kind of eliminates the human part of the equation. A lot of that is fear mongering to some extent, but it is happening. We can't ignore that some people have turned 100% of their content creation over to AI. So like I said, it's kind of eye-opening for the students to see that, oh man, I do write better captions than AI. But then there are some where it's like, I really like that. That would be something we would absolutely use for the client. It is not a foolproof method, but it does get some stuff right.
(11:18):
In one of my other classes, they're allowed to use AI image generators instead of taking their own images. Sourcing images had been a problem in that class. It's a very large class, usually about 500 or 600 students, and they have a WordPress assignment that they do, and a lot of them like to write about video games or sports or things like that, where they have no access to original images and taking screen grabs isn't good enough. And so the AI generation element of that has really opened up some new avenues for them to be able to create content about topics where, otherwise, I would've probably moved them into a different area. One of the best examples of that assignment last semester was a student who chose to do his whole project on teaching people how to use AI image generators.
(12:10):
So he went around campus and shot a bunch of pictures of Old Main and the student center and then turned them into Star Wars scenes using DALL-E, and it was really cool. And then his video part of it was a step-by-step tutorial of how he did it. He did a fantastic job with that, and I reference him in most of the talks that I give about AI now, because that's an example of a student being interested in something and then also going out and doing it, learning it. He had very little experience with image generators before he did that. He felt confident enough at the end of it to record tutorial videos and put himself out there as someone who might be able to help someone else learn how to use this technology, and also be super creative. He was super into Star Wars, and so he turned all the scenes into that world. So it hit on a bunch of different levels: a student getting really interested in the work they're doing in the class, flexing their own creative muscles, and learning a lot in the process. Whereas me showing him how to do that on a screen probably wouldn't have done it. Him getting in and getting his hands dirty while doing it really made an impact on him.
Dan Seed (13:22):
And it helps us learn too. I mean, this is new to us. In a grad class I taught in the fall, one of the assignments was to create a story using AI, putting a prompt into ChatGPT. Then they had to fact-check it, which was really important, because for one of the students, it spit out a list of books that referenced the topic that she was going to use in her script, and none of the books were real, but everything else in there was true, accurate, factual. And then they were creating videos using AI. They created avatars, they used the voice generation, all that. And for me, it's really cool to see how this process works and to see them get into it like you're describing. You mentioned going out and speaking to people, and this, we will segue into deepfakes here in a minute, to have yourself as an expert, or other folks, go out into the community and talk about these topics. How important is that for society at large, and for the university to play that role as well?
Dale Blasingame (14:26):
It's kind of why we're here, right?
Dan Seed (14:27):
To educate people.
Dale Blasingame (14:28):
It's not kind of why we're here, it is why we're here. So I'm always looking for opportunities to do that. I get a particular thrill and enjoyment out of speaking to those different types of groups. Primarily, the ones that have been contacting me most recently are voting groups, nonpartisan voter registration groups, or groups that are generally organized around election time. So I've done, I think, three of those so far this year in different parts. I know I did the one in New Braunfels, I did one in Killeen, and I'm drawing a blank on where the other one was. But those types of groups are especially interested, particularly when it comes to deepfakes, because that has a direct tie-in to confusing people when they go to the polls, just as misinformation did before. It's just the next evolution of disinformation. And so it's been really eye-opening, because not only do you get to share information with them, but you also get a lot of feedback.
(15:32):
So that New Braunfels group in particular, we had to cut the Q&A off because I had to go home. We were 30 minutes over the allotted time for the session, because people just kept having questions. And it wasn't all fear mongering, I don't want to make it seem like that, but the one question that will always stick in my brain came from this elderly woman, who was, I think, in her late eighties or early nineties. She asked a lot of really good questions, I mean, every other question was hers, and they were always really thoughtful. I was talking about deepfake audio in particular, because that's the one that has most experts, and I don't call myself an expert, but most people who are dedicating their lives to figuring this stuff out, most concerned. And her question was: what do I do?
(16:20):
And I told her the best answer is to not answer your phone, because that's the vast majority of the ways the fake audio is going to get you. If you don't recognize a phone number, if it's not coming from someone you have contacted or someone in your contact list, don't answer it. They'll probably leave a voicemail, and even treat those voicemails with 99% scrutiny, where you're 100, I know I just said 99%, now I'm saying 100%, but you're totally skeptical as to what the situation is. We used to have this idea of seeing and hearing is believing, and we've been slowly eroding away at that, with Photoshop first, and then we've moved on to other technologies, and now we're at the point where even video is very easily faked, and audio especially. When you have the visual component of something, your eyes can help you figure out the audio mismatch a lot of the time. But when it's just audio, when you're just hearing something and it sounds so close to that person, especially if it's a loved one of yours saying something, that's a whole other story. And that's where they're seeing the most, quote, success, and I use that in air quotes, with scamming people
(17:36):
From these. So yeah, that question always stuck in my brain: what can I do? And I wish I had a great answer for that. Don't answer your phone is probably not the best answer, but that was the most logical, biggest-impact thing I could think of, especially when you think about who answers their phone. It's primarily older folks. My
(17:57):
Parents always answered their phone, and I would beg them: please don't answer your phone unless it's me or my sisters or your best friends. If someone needs to get ahold of you, they will get ahold of you in some other way. Because my parents were very trusting people, and if someone had called them and said I was in trouble, my parents would've done anything they possibly could have to get me out of trouble. And that worries me from that standpoint, because a lot of other parents and grandparents are in the same situation.
Dan Seed (18:26):
The same thing happened to my grandmother. Somebody called and said, there's a problem at your bank, we need you to come in, we're going to send a cab. And she was getting ready to go, and called my mom, and my mom said, don't you dare go anywhere.
(18:43):
Because again, especially when it comes from something where they believe it's an institution, right? They're very trusting of institutions, and a bank is an institution you can trust, and therefore: okay, I'm going to do what the bank said. Again, we're joined by associate professor of practice Dale Blasingame, and Dale, I want to segue here, and you did that nicely, into deepfakes. This is a term that's been thrown around. We hear it, we've seen it, those kinds of examples that you've talked about. But tell us, for our audience, what exactly is a deepfake? What does that mean?
Dale Blasingame (19:15):
Yeah, I use a definition from the US Government Accountability Office that says a deepfake is a video, photo, or audio recording, a piece of content, that seems real but has been changed with AI. So it runs the gamut. I think a lot of times we think of deepfakes purely as video, because video was kind of the first technology that we started seeing used with deepfakes, particularly when it came to face swap technology. You have a real person, or somebody's representation, in that video, but then you morph someone else's face onto it to make it look like someone else. But as I mentioned, we're usually seeing the most damage, from a scam perspective, happening with deepfake audio. We're also seeing a lot of audio when it comes to politics. There was the famous Joe Biden robocall in New Hampshire in February, where they were really sloppy in the execution of that deepfake audio.
(20:14):
If they had been better at that, it would've fooled even more people. That's kind of the saving grace so far: a lot of the folks who are actually creating this type of content usually get pretty sloppy with it. It still causes damage, don't get me wrong, but it kind of limits the amount of damage, hopefully, because it doesn't take long to spot the sloppy stuff. It's the stuff that's really, really good that gets really hard to discern. And I'm actually working on a research project with one of our colleagues, Nicole Stewart, right now on AI influencers, which are basically, I mean, they're not all this way, so I don't want to stereotype all of them, but a lot of them are basically AI thirst traps. It's usually female accounts that are run by men, who are opening up either Patreon accounts or OnlyFans accounts or some other kind of monthly subscription or pay service, and then just creating this content on their computer, usually using face swap technology. It started with photos, which were always, I don't want to say easy to do, but a lot easier to do than video.
(21:21):
But now the video technology has caught up so quickly. I saw one just this weekend. I've been looking at this stuff for a year and change now, and with a lot of them you can spot the video stuff. You're like, yeah, the body's moving weird, there's some weird morphing. This one account I came across this weekend is the best one I've ever seen. And you could have given me I don't know how many guesses to figure out that it was AI. If that account hadn't put in the profile that it was a digital creator, I would not have thought that it was one. And so the technology, again, is the worst it's ever going to be, so it's only continuing to get better. What I'm particularly interested in is the audience side of things. I'm always curious to know how many people realize it's AI, how many just don't care, and how many think it's a real person and think they're communicating with a real person. We're probably not going to be able to tackle that in the paper we're working on, but we're kind of starting out at the ground level with some of this, because there's not a lot of research being done right now on AI influencers.
Dan Seed (22:31):
So what are some ways, and then I want to get into how this could throw things into a tizzy with the election, or any election going forward, but what are some basic guidelines that people can use when they're looking at things? With photos, it's fairly easy to tell Photoshop, but with video, what are some ways that people can look at something and be better at discerning real versus fake? Because the idea with a deepfake, for a lot of these folks, is to deceive. On the profile of that account you mentioned, they clearly put digital creator, so it gives you that clue, but some people are just putting stuff out there and trying to pass it off as real. So what are some things people can look for, especially as we head into the home stretch of the election season?
Dale Blasingame (23:19):
It all, fortunately or unfortunately, comes down to media literacy. And the reason I say unfortunately is I don't think our country's done a very good job of teaching media literacy. So I always point to a few things. One is to always look at the source. If it's just a random person who's sharing this, I question it. Now, does that mean that the mainstream media or traditional news never gets things wrong? Absolutely not. Mistakes are made, errors are made, but in general, we have a much higher standard in the news industry for verifying and vetting. Is it foolproof? Absolutely not. That's usually the counterpoint: well, they get it wrong too. That happens, but generally our standards should be higher when it comes to verifying, vetting, and gatekeeping. If you can't find the source, that's a problem as well. If it's just one random person who's posting something, even if it's getting a ton of engagement on social, if it's not sourced somewhere and you can't go back and find it, that's a problem. Now, that requires work on the user's part, and here's where the problem always comes into play.
Dan Seed (24:38):
Yes.
Dale Blasingame (24:40):
Most people don't want to take the time to go through and verify, or do the homework to figure out if something is real or not, and research has proven this, especially when it is something that confirms their already held beliefs. And that's what makes political misinformation and disinformation so powerful: we're so quick to believe something that proves our point that we are way more apt to share it without verifying it than we would be if it cut against our side. So that's the biggest thing: look at the source. It's called provenance. Find the provenance, find the original source of that. And it's not easy. I mean, there are whole digital newsrooms built on this idea of provenance, where their sole job is to find original sources of material and verify and vet, and that's hard work. It is a little bit unrealistic to ask regular Joe Blow at home to digitally track the source of something. It's hard to do. So I get why people,
Dan Seed (25:42):
It's a rabbit hole to find that original source. It goes through so many iterations along the way.
Dale Blasingame (25:49):
Yeah, this weekend was a perfect example. One of my best friends is doing a fellowship at Stanford this year, so he's just starting the semester and loving life on this beautiful college campus and remembering what it was like to be a college student. He went to a volleyball match this weekend. Stanford's, of course, a volleyball powerhouse, and my friend is probably the biggest San Antonio Spurs fan I've ever known. And I saw a picture of Tim Duncan in the stands this weekend at the Stanford volleyball match, and it said, Tim Duncan watching his daughter play volleyball. Before I sent it to him, I was like, I just want to stop and make sure that this is accurate. So I tried to find the original source. It came from a verified account, and verification means nothing anymore.
(26:34):
Verification now can be bought very easily, and in the past it used to give you at least a clue as to whether you might be able to trust that source's newsgathering habits and verification efforts. But then I looked up the roster, and I was like, oh yeah, there she is. There's his daughter. It very clearly says it's his daughter. And so then I shared it with him, and that sent him into a tizzy. He was actually at the match. He's like, how did I not realize Tim Duncan was sitting right across from me? So he started poring through all the photos that he took at the match. But again, I'm glad that I stopped and did that in the excitement, because I think a lot of times we don't, and I'll throw myself in that boat too. I do it just as much as everyone else. I'm not proud of it. But oftentimes, that whole idea of stopping and figuring something out kind of kills the vibe of what you're trying to do in the moment on social, right, where you're trying to either prove a point or get someone else to believe what you believe. And at that point, it's kind of like, well, I don't really care if it's true or not, this helps prove my argument. And that's, I think, a really dangerous stance.
Dan Seed (27:49):
Yeah, well, we saw some of that recently with the P Diddy situation, where people were retweeting a photo that was supposedly Kamala Harris and P Diddy. In reality, the photo was of her with her then boyfriend, Montel Williams, and somebody had photoshopped his face. And that gained traction, that spread like wildfire, and it doesn't take long. It takes the right people sending it up, and then all of a sudden it just kind of blows up.
Dale Blasingame (28:22):
And the danger with that is, research has shown that once someone has been exposed to misinformation or disinformation (misinformation is inadvertently sharing false or fabricated material; disinformation is intentionally creating or sharing it), even when someone tells you it's not real, your mind tends to continue to believe it as true. And by the time a news organization or debunkers or fact checkers, whoever it is, get around to debunking something, the damage has kind of already been done, because again, our brains still go back to: well, I saw it, therefore it was real.
Dan Seed (29:13):
Right? Seeing is believing.
Dale Blasingame (29:14):
Yeah, we have to get rid of that mentality of seeing is believing these days.
Dan Seed (29:19):
So we're getting tight on time here, so I've got a couple questions for you before we head out. One, how easy are these deepfakes to make? Is this something that the average person, with average experience with editing, Photoshopping, et cetera, could easily turn out? Or is this a more intensive kind of process?
Dale Blasingame (29:42):
It depends on what you're looking for in terms of quality. If you're looking for something that is truly going to fool a vast majority of people, it's still, I don't want to say difficult, but there's kind of a higher bar you have to cross to accomplish that goal. If you're just looking to confuse, or even just looking to be creative and do some stuff, it's easy. I walk students through an example where I take a colleague of ours, Sarah Shields, and I take her headshot from our university page and put it on another person's video, which is very common face swap technology. It did pretty well with it. It mirrored her chin and her mouth and her eyes especially, but didn't quite get some of the other features. Then I put my own picture on it. It's a female's face, so my picture ended up with long hair, and I talk about how great I would look if I had long hair. But that took 20 seconds to do, and it was pretty good. And it was free. You mentioned ElevenLabs. Depending on what you're trying to do, there are some free options and some paid options, but even the paid options are something like five to seven bucks a month, and you can train the model to record in your own voice. It takes some work to get the tone down, especially on audio. What I've noticed a lot of times is the AI ends the sentence on an uptick, or it doesn't get
Dan Seed (31:05):
Down. It doesn't get down. Yeah, it doesn't get that cadence down quite
Dale Blasingame (31:08):
Right. Yeah, but that's also a matter of you training the model with better training material. So it's fairly easy to create this, and again, it's only going to continue to get easier and cheaper. The days of somebody having to be a code master to create stuff like this are long gone. These days you can find free, simple tools to create content like this. It may not be the best, but it's good enough to confuse, or to show off your creativity. And deepfakes are sometimes done for creative or satirical purposes, so I don't want to always paint them in a bad light, but you can create them relatively easily and quickly.
Dan Seed (31:50):
Now, last question for you, and I don't know if you have an answer for this. Teaching that media management course, I've been talking about deepfakes going back to 2016 and before the election in 2020, and it was always kind of like, eh, this probably isn't going to be that election. The technology probably isn't quite there yet. Photoshop's much easier, much faster. You can do that stuff on a larger scale quicker. Do you think that this could be the election cycle where deepfakes play a more prominent role, or do you think that's still down the road a little bit?
Dale Blasingame (32:28):
I'm sure they will. I mean, like you mentioned, there are already plenty of examples of that happening. I'll bring this up as an example of how a deepfake didn't necessarily work, but it caused something else to happen. When former President Trump shared the DeepFakes of Taylor Swift endorsing him, or not Taylor Swift endorsing him, but her fans endorsing him,
(32:50):
That prompted Taylor Swift to then actually come out and endorse his opponent. So even though the deepfake may not have achieved its intended goal, it did have a consequence. So I'm sure that will happen. I say this kind of ironically or sarcastically, and with the strongest air quotes possible, but one of the "positives" of our current political situation is we are so dug in on our beliefs right now that I don't know how many of us are actually changing our minds. So even if you see something really horrible, it's going to take a lot to get you to say, well, I'm never voting for that person, or I'm switching my vote to someone else. So
(33:38):
I know that that's kind of very pessimistic in nature, and I hate saying that, but also there's that whole myth of the undecided voter. I just don't know that that really exists anymore. I think even people who say they're undecided voters really know who they're voting for. They're waiting to see if something might come along, and I don't know how many of those truly exist. I'm sure they're out there. I'm not trying to make generalizations, but I think it has potential on that. One last note on that: Donie O'Sullivan, who's the main deepfake debunker at CNN, raised a really great point at South by Southwest this past year. I sat in on a panel he was doing on debunking DeepFakes, and he brought up that, especially with audio, the thing he's most concerned about is not people creating stuff to confuse and make it look like somebody said something they didn't.
(34:37):
It's somebody having the automatic denial now of saying, I never said that, when they actually did say it. He said that worries him so much more than somebody creating a fake audio clip of Harris or Trump saying something they didn't say. He brought up the Access Hollywood tape. He's like, if that came out now, Trump could very easily say, I never said that. There was no video of him saying it, and even if there were, it could be claimed it was deepfaked. That was his biggest concern, and it really helped change my mindset on this. It's not the creation of the content itself, it's the environment we're living in, where now a politician of any political background, of any stripe, has this automatic excuse of, I never said that. I don't know what we do about that. There's only so much proving you can do that something happened. And again, like I said, we're so dug in on our beliefs that even if you prove it to be true, again, "proving" with air quotes there, there's still going to be a large percentage of the audience, doesn't matter what side of the political spectrum, who's either going to believe it or not believe it. And so it's a very difficult situation to be a part of.
Dan Seed (35:51):
I'd never really thought of it like that, but that makes a whole lot of sense. The reaction to what people see, as you mentioned with Taylor Swift, or denying something you really did say, that's another avenue I had never really considered, and now something more to think about. So Dale, thank you very much for being here. I think the key takeaways here for people are education: question what you see, do your research, and don't necessarily take things at face value.
Dale Blasingame (36:18):
Yeah. When I give this talk to the voting groups, I always end with a slide that says, I'm sorry, I wish I had better answers. But I think you succinctly summarized them there at the end.
Dan Seed (36:29):
Well, thank you, Dale. Thank you for your time and thank you for all your work in this. This is important stuff.
Dale Blasingame (36:34):
Appreciate it.
Dan Seed (36:35):
Thank you for the pleasure of your time in downloading and listening to another episode of Big Ideas. I want to say thank you to everybody who has downloaded and listened to any episode over the last four years. It's been our pleasure to bring you this podcast every month, with some of the most fascinating people and topics we've been able to cover, and we hope that it's enlightened your world a little bit, not only in terms of society at large, but also the neat and interesting things that are happening here at Texas State. With that said, this is our final episode of Big Ideas, and again, I want to thank you for downloading and listening. I want to thank the university for their support. I want to thank Jamie Blaske specifically. Jamie is the producer and the guest wrangler on our show, and it's been wonderful working with Jamie. I want to thank all of our guests who have ever appeared on Big Ideas, and I want to again thank the audience for listening and giving me this opportunity to bring these stories to you, which was a truly educational experience for me as well.
(37:42):
So for the last time, this is Big Ideas, and until we meet each other again in some other form or fashion, stay well and stay informed.