Digital Literacies and 21st Century Skills

Episode 96, Season 11

Discerning Truth in the AI Era (Michelle and Ryan)

In this episode, Ryan and Michelle dive into the critical topic of disinformation, explaining its distinction from misinformation. They explore the historical context of disinformation, its modern amplification through technology like generative AI and DeepFakes, and the real-world impacts these technologies have already had. The discussion highlights the complexities of identifying and combating disinformation, the role of media literacy, and the importance of structural changes and educational reform. The episode underscores the necessity for vigilance, critical thinking, and informed consumption of information in the digital age.

References

Alba, D. (2023, May 22). Fake image of Pentagon explosion goes viral, briefly spooks markets. The New York Times.
https://www.nytimes.com/2023/05/22/technology/pentagon-explosion-ai-image.html

Bulger, M., & Davison, P. (2018). The promises, challenges, and futures of media literacy. Journal of Media Literacy Education, 10(1), 1-21. https://digitalcommons.uri.edu/jmle/vol10/iss1/1/

Graham, T. (2023). The incredible creativity of deepfakes — and the worrying future of AI [Video]. TED Conferences.
 https://www.ted.com/talks/tom_graham_the_incredible_creativity_of_deepfakes_and_the_worrying_future_of_ai

Johnson, E., & Darnovsky, M. (2020). The disinformation dilemma [Audio podcast episode]. In Brave New Planet (Episode 2). Pushkin Industries.  https://bravenewplanet.fm/episodes/the-disinformation-dilemma

Marcelo, P. (2023, May 23). AI-generated image of Pentagon explosion triggers brief stock market dip. Associated Press.
https://apnews.com/article/pentagon-explosion-misinformation-stock-market-ai

Seitz-Wald, A., & Memoli, M. (2024, January 22). Fake Joe Biden robocall tells New Hampshire Democrats not to vote Tuesday. NBC News.  https://www.nbcnews.com/politics/2024-election/fake-joe-biden-robocall-tells-new-hampshire-democrats-not-vote

Spies, S. (2020). Producers of disinformation. MediaWell.
https://mediawell.ssrc.org/research-reviews/producers-of-disinformation/

What is Digital Literacies and 21st Century Skills?

Podcast for the Digital Literacies and 21st Century Skills course at Adelphi University's Educational Technology program.

Ryan
Hello, my name is Ryan.

Michelle
And I'm Michelle. And this week we're gonna be exploring disinformation.

Ryan
Yeah. So coming off the heels of the misinformation unit, it's important to understand the difference between misinformation and disinformation. While both have to do with the spread of false or misleading information, misinformation is spread without malicious intent, whereas disinformation is deliberately created and spread with malicious intent.

And I think there are a couple important things to understand when discussing disinformation. Those things are, you know, considering who created it, what their intents and purposes are behind it, and what we have to do to combat it.

Michelle
For sure. And like you said earlier in our conversation before this, disinformation isn't anything new to us. It goes all the way back to wartime politics in the 18th and 19th centuries.

Ryan
Yeah, absolutely. So the Producers of Disinformation article breaks down pretty well where it comes from: governments, political actors, partisan media. And they also discuss how disinformation is not a new idea. Although its scale and speed have been amplified a lot in the modern age, the historical use of disinformation is pretty interesting, and it's also discussed in episode two of the podcast Brave New Planet. We've all sort of learned about propaganda in our history classes, so we know that disinformation through news and publications is not a new idea, but specifically altering or creating fake imagery is not new either.

You know, today you think about Photoshop and now generative AI, but humans have been interested in altering images since photography was basically invented, far before the digital age. There's a famous example of a portrait of Abraham Lincoln where a Civil War portrait artist superimposed Lincoln's face onto another senator's body, because the artist thought Lincoln's tall and lanky frame was not very dignified, I guess you could say. And it became a favorite pastime of a lot of dictators in the 20th century: Stalin removed people from photos when they fell out of public favor or were killed or jailed, and Hitler, Mao, and Castro all did it. U.S. agencies likely did it too.

But back then, manipulating these images was a highly skilled and time-consuming process. And it still was early in the digital age, but now it's becoming democratized. The average person can record, manipulate, and distribute the content by themselves with very advanced technologies like generative AI and deepfakes.

And there are a couple of examples I've seen in the modern day already. I don't know if anybody remembers, but a while ago North Korea released a photo of one of Kim Jong-un's missile tests in the ocean, and it made headlines when it was debunked as a Photoshop job. Even more recently with generative AI, there was a case in 2023 where a fake image of an explosion at the Pentagon caused a brief dip in the stock market. So that was a real-world effect of generative AI. Another one was in 2024, when a company got caught making fake robocalls in Joe Biden's voice in New Hampshire, telling Democrats not to vote in the primary. And this could just be the tip of the iceberg for this new technology for creating and spreading disinformation.

Michelle
I agree with you there. I think it definitely is just the tip of the iceberg, and we've seen some of the effects already, from both misinformation and disinformation. With elderly people, there are always credit card scams, or phishing emails supposedly from your boss about some $800 gift card. Those are malicious, but without the political aim of trying to sway things that disinformation has.

Climate change was also referenced a lot: all the disinformation around it has, according to some, dampened the level of concern that climate change warrants. But it's definitely been interesting, especially for me. I briefly listened to a TED Talk called The Incredible Creativity of Deepfakes and the Worrying Future of AI, and they were actually able to show a clip, a video of a female Spanish singer, that I'll link in here right now so you guys can hear what I'm talking about.

The clip transforms her voice from a Hispanic woman singing into Visi actually singing and performing the song, and it looks as if he is singing in Spanish. However, according to the TED Talk, this man does not understand or know a lick of Spanish. So the generative power of being able to manipulate video now is interesting to see, and it's going to be interesting how we as a society react and evolve with this in the 21st century.

I know when we were kids growing up, all we were analyzing in school was DBQs. It was all DBQs, DBQs, DBQs, and how the political images being shown were propaganda and things like that. It was like, what is the malicious intent of the government in portraying this image of Russia or Germany or wherever, and how is it being portrayed?

Now it's like thinking, okay, what is this generative AI image and how are we going to adapt as a society with it? So it's very interesting and curious to think.

Ryan
Yeah, it's pretty interesting 'cause with political cartoons and stuff, you see the messaging and you can understand the intent behind it. Like, you know that it's somebody who created it and drew it, and there's already the predisposition that you know it's a satirical take on something. It's making some sort of social commentary, whereas just trying to fool people with a real image can definitely skew their perception of reality.

Michelle
For sure. I'm just circling back now, thinking about the phone call you talked about and how they recreated Biden's voice for voter suppression, telling people not to go out to the primaries. And it's just interesting to think: how else are they going to manipulate the world and history around us, and how do we teach people what is right and wrong?

Us growing up with it, we kind of figured it out ourselves. I know during the misinformation week we read an article about how our generation knew how to spot deepfakes and false information, while those sharing it were people in their sixties and seventies on Facebook and the like, who would just immediately share. Whereas we as a generation, I feel, are more willing to investigate and research a little bit more.

Ryan
Yeah, absolutely. I think you can wrap that into the media literacy idea. It's definitely important to educate upcoming generations, even though older generations might be more susceptible to it. As technology advances, you're always gonna have to update your techniques and try to stay on top of it to better notice when something is AI-generated.

And that goes even beyond the political spheres that we're talking about here. Like you said with the music, there's already a massive influx of AI-generated music that's just being pumped onto streaming services and botted for money, as well as people impersonating artists and creating fake leaks—maybe to tarnish someone's reputation.

And another really scary sector that was in the news, I think earlier last year, was deepfake pornography. Anybody can create sexual images of people in their personal lives or of celebrities and post them. And it brings up a lot of scary questions about consent when someone's likeness is used in deepfake or AI-generated images.

Michelle
For sure. And with that especially: is your person still your person with generative images and deepfakes? The podcast I listened to talked about having to go through the process of copyrighting your AI-generated avatar. Is that picture your picture?

We've talked about this before with generative AI: what you create, and how it's created, is technically under a public license. A computer created that image of you from your likeness and from what you've imported. But how does that tailor to you? How does it belong to your person, and whose right is it?

So with deepfakes, it's interesting to think about: if somebody creates an image of you and your likeness, even though it might have one thing that's slightly off or some exaggerated feature, it's still you in essence. And then next thing you know, your whole career or your whole reputation is gone because you've been thrown under the bus. And how do you prove a deepfake?

That was something that was talked about too: how do you prove disinformation? With a lot of the disinformation we found, you can't find the source. Or the source is so well hidden behind more disinformation, layered on top of disinformation and sourcing itself in other places, that you're digging an ever deeper hole of false information being fed to you, and it becomes hard to spot the lie.

Ryan
Yeah. With what you're saying about personhood, that immediately reminded me of the AI Taylor Swift Trump ads, where it looked like she was supporting Trump. That was a more obvious example, and I don't think anybody believed it was actually Taylor Swift. But I'm just wondering, since I kind of didn't hear about anything after that: are there any actual repercussions people can face for using her likeness without her consent, and using her public image to advertise something she doesn't actually agree with?

And it's kind of about holding some accountability, with what you said about diving deeper and deeper to find sources. That goes back to the media literacy thing: it has to be a constant battle, because the usual checkmarks are not always going to work. Checking sources in your standard places and doing standard research into who wrote it is not always going to be a catch-all method for spotting disinformation.

Michelle
Just because something's cited doesn't mean the citation is actually relevant to the claim it supposedly supports.

Ryan
Right, exactly. So although media literacy is definitely important and something we should absolutely invest in, I also agree with what Bulger and Davison say in their article: it's not enough by itself to combat disinformation entirely. It falls short in certain areas, as people are often overconfident in their ability to not be fooled.

And there are also socioeconomic barriers to developing these skills for everyone. An overemphasis on individual responsibility also creates a blind spot around the need for structural change. That reminds me of what you said earlier about environmental campaigns, like the personal carbon footprint campaign that BP pushed, in the mid-2000s I think it was.

Yeah, through a media campaign they pushed the responsibility for global warming down to the individual level. I remember it even made its way into my school textbooks: understanding your own personal carbon footprint. It's just a way for them to distract from how much damage their company does to the environment and to push that accountability onto other people. So that's another perfect example.

In that respect, structural policy changes are definitely an important step that needs to be taken, along with holding social media platforms and other actors accountable. For example, Europe has the Digital Services Act, which forces a level of transparency on certain social media platforms.

Michelle
And the "created by generative AI" labels and things like that on Instagram that I saw floating around for a little bit.

Ryan
Yeah, yeah, absolutely. And I think things like that, and educational reform to invest in educating the future generations on up-to-date media literacy techniques—working in tandem with grassroots fact-checking efforts—are all necessary ways forward in my eyes at least.

Michelle
Mm-hmm. I don't know if you have, but I've jumped into the digital literacy standards a little bit, since they're going to be implemented next school year, to figure out how to get our curriculum in sync with them. Looking at it, it's difficult to integrate that media literacy and teach it correctly. Because we're used to teaching students how to research, how to find real evidence in texts, how to find evidence in scholarly journals and things like that. But now anybody can create anything.

So how do you work around that and teach kids? It's definitely going to be a learning curve for us too, because we are all learning how to navigate this disinformation world as well. I can only imagine being in a high school or middle school setting and trying to teach about disinformation on war, or on poverty-stricken nations that are experiencing a lot more of this.

Ryan
Yeah. Going back to those digital media literacies you mentioned, I agree that it definitely seems like a complicated thing to integrate, and it's going to take an organized, large-scale effort. But it needs to be done. Because looking at the skills you need to develop in these students, it's impossible for one class or one teacher to successfully develop them all.

Dispersing these ideas over many courses and many subjects, so they're constantly present in almost every course in some way, shape, or form (it doesn't have to be in totality), as a slow process throughout a student's whole education, I think, would be very beneficial to a lot of students.

Michelle
Yeah, I do see it being a challenge moving forward, especially given the impact of social media in schools in general. News comes less from TV now, and there are so many paywalls blocking people from finding information that it becomes easier for disinformation to spread.

Because even if you go to The New York Times, you only get, what, five free articles a day or something like that? So you can't dive deeper. If they reference an earlier New York Times article and you've run out of free articles for the day, well, guess what? Now you're stuck with the information you're given. You can't magically make more money appear to get behind the paywall.

So media itself also creates barriers that feed the disinformation problem.

Ryan
Yeah. You might not want to pay The New York Times because you think they're just spreading disinformation themselves. So it's really a hard landscape to navigate. But I think we have viable solutions, and as long as we take steps in the right direction and make a concerted effort, we can definitely make an impact, despite the dark horizon of generative AI technology looming.

Michelle
For sure. There's definitely no rosy outlook here: disinformation is not going anywhere. It's about making sure that we as a generation, and the generations to come, know how to combat it.

So, as we've discussed in this episode, the rise of AI creates both opportunities and risks for how we consume information. It's clear that as consumers and teachers, we need to be vigilant and critical and informed about the sources that we trust.

This is Michelle,

Ryan
This is Ryan,

Michelle
And we're signing off. In an era where everyone can create and share content, it is up to all of us to determine what is real and what's not. Let's keep questioning, stay informed, and never stop learning.