Make an IIIMPACT - The User Inexperience Podcast

Hello, everyone, and welcome back to another enlightening episode of Make an IIIMPACT - The User Inexperience. Join our hosts, Makoto Kern, Joe Kraft, and Brynley Evans, as they explore how AI is reshaping industries across the globe.

In this thought-provoking discussion, three experts from IIIMPACT's UX & AI consulting team - Makoto Kern, Brynley Evans, and Joe Kraft - examine the intensifying competition between technology companies developing artificial intelligence. The conversation explores how this modern arms race is driving innovation at unprecedented speeds while raising serious questions about safety, ethics, and the future direction of AI. 

From analyzing the strategies of major players to considering the societal implications, this discussion offers viewers an insider perspective on one of the most consequential technological competitions of our time. Whether you're a technology professional, business leader, or simply curious about how AI will shape our future, this panel provides valuable insights into the forces driving AI development and what might lie ahead.

IIIMPACT has been in business for 20+ years. Our growth has been recognized with a place on the Inc. 5000 for the past 3 years in a row as one of the fastest-growing private companies in the US. Product Strategy to Design to Development - we reshape how you bring software visions to life. Our unique approach is designed to minimize risk and circumvent common challenges, ensuring our clients can bring innovative and impactful products to market with confidence and efficiency.

We facilitate rapid strategic planning that leads to intuitive UX design and better collaboration between business, design, and development.

Bottom line: we help our clients launch better products, faster.

Support this channel by buying me a coffee: https://buymeacoffee.com/makotob

Timestamps:
00:00 - Introduction and speaker presentations
05:15 - Defining the AI arms race and key players
11:30 - Technical innovations driving competition
18:45 - Ethical considerations and safety concerns
27:20 - Regulatory challenges and government responses
34:10 - Impact on businesses and consumers
42:35 - The role of open-source in AI development
49:50 - Predictions for the next wave of AI advancement
58:20 - Recommendations for responsible AI development
01:05:40 - Closing thoughts and future outlook

Speaker Bios

Makoto Kern - Founder and UX Principal at IIIMPACT - a UX Product Design and Development Consulting agency. IIIMPACT has been on the Inc. 5000 for the past 3 consecutive years as one of the fastest-growing privately owned companies. His team has successfully launched hundreds of digital products over the past 20+ years in almost every industry vertical. IIIMPACT helps clients get from 'boardroom concept to code' faster by reducing risk and prioritizing the best UX processes with their clients' teams.

Brynley Evans - Lead UX Strategist and Front End Developer - Leading large-scale enterprise software projects for the past 10+ years, he possesses a diverse skill set and is driven by a passion for user-centered design; he works on every phase of a project from concept to final deliverable, adding value at each stage. He has recently been part of IIIMPACT's leading AI Integration team, which helps companies navigate AI adoption, reduce risk, and integrate AI into their enterprise applications more effectively.

Joe Kraft - Solutions Architect / Full Stack Developer - With over 10 years of experience across numerous domains, his expertise lies in designing, developing, and modernizing software solutions. As our AI team lead, he has recently focused on integrating AI technology into client software applications.

SUBSCRIBE TO THE PODCAST ►   https://www.youtube.com/@UCfV9ltRpRuggt56dVgZ3o9Q 

LISTEN ON:
ITUNES: https://podcasts.apple.com/us/podcast/make-an-iiimpact-the-user-inexperience-podcast/id1746450913
SPOTIFY: https://open.spotify.com/show/43NxDthLmfF1NFjwedNTmv?si=4223272eba4e4eb8

ADD IIIMPACT ON: 
INSTAGRAM:   https://www.instagram.com/iiimpactdesign/
X: https://x.com/theiiimpact

What is Make an IIIMPACT - The User Inexperience Podcast?

IIIMPACT is a Product UX Design and Development Strategy Consulting Agency.

We emphasize strategic planning, intuitive UX design, and better collaboration between business, design, and development. By integrating best practices with our clients, we not only speed up market entry but also enhance the overall quality of software products. We help our clients launch better products, faster.

We explore topics about product, strategy, design, and development. Hear stories and learnings on how our experienced team has helped launch hundreds of software products in almost every industry vertical.

Speaker 1:

Anything that does involve research into a certain viewpoint, yeah, that's where it gets pretty scary to know how they tweak that.

Speaker 2:

It's way harder to tell: is this a neutral viewpoint, or is it just coming from, you know, very specific data that it's been trained with? That's my kind of problem with it.

Speaker 3:

I guess public understanding of how AI can be manipulated and promoting critical thinking is probably crucial to, you know, resisting any sort of subtle propaganda that could be used with these large language models.

Speaker 1:

Hey, everybody. Welcome back to another episode of Make an IIIMPACT. I'm your host, Makoto Kern. I'm here with our wonderful AI team and integrators, Brynley Evans and Joe Kraft.

Speaker 2:

How's it going?

Speaker 1:

All right, we've got

Speaker 3:

Pretty good, and looking forward to this episode.

Speaker 1:

Yeah, we've got a couple exciting topics to discuss today. One is going to be AI models and maintaining neutrality, and the other one is the impact of cheaper AI models. So let's get right to it. Brynley, wanna kick us off?

Speaker 3:

Good. Yeah. So a little bit of going back to what happened earlier this year with the announcement of DeepSeek R1 in January. Obviously, it became the most downloaded free application on the US iOS App Store. It even surpassed ChatGPT in terms of downloads.

Speaker 3:

So, amazing. A lot of press around it and, you know, hype. And they obviously seemed to build an AI model at a fraction of the cost of tech giants like OpenAI, Anthropic, and Google. And I guess these Chinese models are now being integrated into different platforms as well. I don't know if you heard that Lenovo laptops now actually have DeepSeek built into them, which is interesting.

Speaker 3:

So, you know, you sort of start looking at this rollout of all these large language models. And while it's an amazing development, and it obviously goes a long way towards democratizing AI models, there could be a risk that many users aren't thinking about, and that's really what I wanted to talk about today. So we've all used, well, most of us, should I say, have used a natural language processor or a large language model, and we can all attest to how useful they are. I know, just personally and between all of us, it helps us work a lot more efficiently.

Speaker 3:

And, you know, outside of the general kind of to-do requests or actionable items, I've found myself, and it's a point that we've touched on as well, especially through these conversational interfaces, engaging with a large language model about politics or history or philosophy. And you can have really engaging conversations. You can almost have a rapport that is very close to a natural conversation.

Speaker 3:

So, kinda given this friendly voice and believable personality, it's pretty easy to forget what's going on behind the scenes. And digging deeper, how the model is trained could really pose a larger global risk if we aren't careful. And again, I don't wanna sound like a conspiracy theorist, because I'm far from that, but it's interesting to be cognizant of what could be occurring at the level of, you know, the internal rules in these large language models.

Speaker 2:

So Yeah.

Speaker 3:

We're obviously on a course to access information faster. We wanna be more productive, and these are great tools for this. So everyone climbs in and goes, well, I can be X times more productive. But what I wanna ask is, are we being careful that it doesn't lead to a reduction in critical thinking? Which I really believe is an important aspect here.

Speaker 3:

You deal with these models that have answers to any questions. You may be someone that checks the answer the first time; maybe you use a different model and check it; maybe for the first month, you're fact-checking. But a year after that, you've probably settled down to a nice comfort zone of, yeah, this is good.

Speaker 2:

A year is a lot. Like, within ten seconds, I'm like, cool, I trust this thing fully. I'm good to go. I'm not checking anything here, which is bad.

Speaker 3:

But that's the thing. You only have to check a few things to think, wow, this has nailed it. This is the knowledge I was looking for. And that raises the trust right up.

Speaker 2:

Yeah. Perfect. I would generally fact-check against my own knowledge. I'll ask it something and sort of go, yeah, it's answering based on what I know of other things, and I can see that it's doing well. But yeah, that's about as far as I go.

Speaker 3:

Yeah, well, that's it. And I think, you know, in those initial engagements, you're kind of feeling it out. Like you're saying, Joe, you're testing your own knowledge and seeing whether it's echoed back, or you look at a certain political viewpoint or an ethical dilemma. You may be excited to see, oh, it's talking to me in a way where we're on the same page politically or ethically. So immediately you begin to trust it. Now, if large language models are just getting easier to build, like we've seen with DeepSeek and, you know, that one like Manus that you were sharing that does more for you, there's a certain point where you think, well, what stops these from being weaponized or utilized by a certain government or a terrorist organization to really control their narrative?

Speaker 2:

And it can be even subtle, I presume, too. It's not like you outright ask a question and it gives you, like, oh, yes, socialism is the best form of government. It's not being outright obvious about it, but it can be like you're asking something about a certain famous person who's quite controversial, and just the way it answers can guide your opinion on that person. It can be like, yes, this is a misunderstood character.

Speaker 2:

He's done a lot for the world, put through all these initiatives, and he has all these charities. And you're like, wow, this is actually a really good person. Meanwhile, the AI model has got either a programmed or data-bound bias towards giving that kind of answer and pushing that agenda, and you don't know it. It's super subtle.

Speaker 2:

It's just giving you information, but it's maybe omitting everything else. It's not giving you a fair, you know, two-sided comparison of the entire person.

Speaker 1:

Didn't we do a podcast on AI and transparency back then? But I mean, it's factual that DeepSeek won't say things about Tiananmen Square. There is censorship that we know about. We know that Google and OpenAI models have been shown to be politically biased as well. And who knows what kind of corporate pressure there is; obviously, even Zuckerberg discussed how he was getting pressure, from a social media perspective, to suppress certain articles and push others during the Biden administration.

Speaker 1:

So, I mean, I'm curious to see how models could be tweaked. And for the most part, I think most people are using it for very neutral types of tasks. Especially if you're using, like, Manus or something that's more agentic, where it's like, hey, go out and do this for me, buy some tickets or whatever, it's basically neutral. But, you know, anything that does involve research into a certain viewpoint, yeah, that's where it gets pretty scary.

Speaker 3:

To know how they tweak that, you know. If someone's researching something, or if this is the tool for upcoming journalists, where you want to reason with it or, as you were talking about, Joe, you want to try and understand a viewpoint, but certain things are omitted. And Makoto, what you say is exactly it. Like, what's alleged with DeepSeek is that anything that's embarrassing to China is sort of being worked around. And it's those subtleties that I think are dangerous, because of what I like to call the parable of the boiling frog: slow change is change that you don't notice.

Speaker 3:

You know, you would notice if there were radical differences in view, but to slowly start steering people in a certain direction over many years...

Speaker 2:

It's really easy to do.

Speaker 3:

Yeah. It's potentially seeing history in a different light.

Speaker 2:

It does make me think, though, thinking of this purely from a technology standpoint: we have kind of gone through this before. This isn't really new with AI. If you think back to, like, what did we really do before AI? We used Google.

Speaker 2:

Right? We searched in Google. Technically, no difference there. Whatever results Google comes up with when you ask about a famous person, they could be negative or positive articles about that person.

Speaker 2:

Very easy to steer the narrative there too. So I don't think, from a technological perspective, this is something new. But I think it's just so much more subtle now. Right? Because previously, you would get a whole bunch of articles, and you could look at the site and be like, oh, well, this is a very far-right or far-left-leaning site or something like that.

Speaker 2:

And you can go, okay. Well, I can see what the search engine's doing. It's obviously biased towards a certain viewpoint. But when it's coming from the AI, you can't really see the sources it's pulling from. There's no real transparency in how it's even coming up with that answer.

Speaker 2:

So it's way harder to tell: is this a neutral viewpoint, or is it just coming from very specific data that it's been trained with? That's my kind of problem with it.

Speaker 3:

Well, I was going to say I disagreed, but you pretty much explained it there. At least with something like a search engine, yes, we needed to be trained that not everything you're looking at is fact, and maybe that's where critical thinking became more important. The difference is, again, that you're now relying on something that aggregates everything. So instead of going, well, out of those five sites, I could see that these two were left, this was right, this was centered.

Speaker 3:

You can get a balance and make your decision, as opposed to relying on one source now to say, give me an overview of this, and it's going to give you one opinion. So you no longer have that, oh, okay, maybe that belongs to that news agency, and it's slightly more slanted. That's probably where it gets a little bit more dangerous. And then you've got to bring up the question of, does the model with the largest user base have the potential to control the narrative of whoever is connected to that model?

Speaker 3:

So you think about cheaper models: well, why is everyone gonna continue using Claude or ChatGPT when we potentially have a free model over here that's better than all of those? Everyone's gonna go, well, I don't wanna spend $20 a month.

Speaker 2:

There is a cost to it. It may not be a financial cost, but potentially an ethical and moral or some other kind of mental cost you're paying for that, maybe. Yeah. Exactly.

Speaker 1:

Yeah. Oh, I was gonna say, there's a new news media outlet that's growing in popularity, and I think what they're doing is really smart. It's called Ground News, and for whichever article you're reading, it basically shows the bias, whether it's more left-leaning or right-leaning or center-leaning, and the percentage of how much it leans, which is really smart.

Speaker 1:

You know, the amount of information we have at our fingertips within seconds is so powerful compared to just five or ten years ago. When we were growing up, you had to go to the library or watch, you know, three news outlets and that's it, maybe the radio. But now you have so much that it's almost irresponsible to only do your research on one news media article when you're trying to find something that may have that bias. You should really do a little bit more; it doesn't take much. And I think Grok does it for sure.

Speaker 1:

And I think some of them do cite their sources when you search for things, so that's great. But doing something like what Ground News does, I think, is really smart, especially when you start to get political or anything.

Speaker 3:

And maybe that's where there actually needs to be some sort of regulatory AI that filters things, where you can say, right, this is the region I'm selecting and the certain guidelines that I'd like it to adhere to, and it actually acts as an intermediary between you and certain models, potentially even marking them as, right, this could be slightly...

Speaker 2:

It's still just moving that responsibility to another AI model, though. If that AI model's compromised, it's kind of just like moving the goalposts to another AI.

Speaker 3:

It's much more difficult for collusion like that, isn't it? So if you had two independently created models, you know, with completely different origins, it may...

Speaker 1:

You need a governing AI body that watches all the other AIs.

Speaker 2:

That's quite hard to see: an AI model watching the AI models.

Speaker 3:

Yes. Yeah. And that's what I'm keen to think about as well. But I wanted to touch on a few of these things, because these sorts of manipulations of a model, changing or pushing a certain narrative, can have such profound political ramifications, whether domestically or internationally. But obviously, unlike very overt propaganda, it's not seen that clearly.

Speaker 3:

So it's much more insidious in how it would disseminate the information. So let's look at a few of the ways this could play out. First, subtle mass persuasion and narrative control: you've got gradual shaping of public opinion. Just by small tweaks, downplaying historical events slightly more and more, or emphasizing particular ideological perspectives, or even just flat-out blocking alternative viewpoints, it would be quite easy for an organization or a government to just start slowly steering that public perception over time, and also allow them to...

Speaker 2:

Sorry, that just reminds me of a quote. Sorry, Bryn, just because it popped into my head. You know that famous quote, history is written by the victors? It's kind of like the same thing with this, in a way. It's like the history of a country is now being written almost by the data an AI is trained with.

Speaker 2:

You're just going to rely on that. Yeah, sorry, a bit of a tangent there.

Speaker 3:

No, you're absolutely correct.

Speaker 2:

That's where my head went, in a kind of way. And whichever AI models win the AI war are gonna be the victors, right? And then we're kind of, like, determining our knowledge sources based on those victors. So, yeah.

Speaker 3:

yeah. And also the fact that

Speaker 2:

Interesting.

Speaker 3:

Just inherently, people generally hear a fact and don't necessarily fact-check it. I mean, the number of facts I can think of, things my parents would have told me, that you question forty years later, you're like, whoa.

Speaker 2:

That wasn't true at all.

Speaker 3:

But I never even questioned it. And it's those sorts of things, slowly...

Speaker 1:

Like how Columbus didn't just discover America. Yeah.

Speaker 3:

So, you know, what are passed off as maybe neutral facts would slowly just become ordinary, become common knowledge, but they're not based on the truth at all. And then that slowly shapes global perception as well.

Speaker 1:

Do you see a potential need? I mean, agents obviously are the big thing, having personal AI agents doing things for you. But is this almost like, you know how in cybersecurity they have ways to block attacks and things like that? Why can't we create an AI agent that says, well, wait a minute, this article is 35% incorrect or 75% leaning this way or whatever the case is? And maybe just a quick visual, because obviously no one wants to even do the research.

Speaker 1:

So why not just say, hey, this might be false? Or it might be almost like a Snopes that fact-checks everything.
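
(A minimal sketch of the kind of flagging agent Makoto describes, assuming an OpenAI-compatible API; the model name, prompt, and JSON shape are illustrative assumptions, not a production fact-checker.)

# Hypothetical bias-flagging agent: ask an LLM to score an article's
# leaning and print a quick visual badge before the reader dives in.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def flag_article(article_text: str) -> dict:
    prompt = (
        'Rate this article. Reply with JSON only: {"leaning": '
        '"left|center|right", "leaning_pct": 0-100, '
        '"claims_to_check": ["..."]}\n\n' + article_text
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
    )
    return json.loads(resp.choices[0].message.content)

report = flag_article(open("article.txt").read())
print(f"[{report['leaning'].upper()} ~{report['leaning_pct']}%] "
      f"{len(report['claims_to_check'])} claim(s) worth double-checking")

(Of course, this only moves the trust problem to whichever model does the scoring, which is exactly the objection raised a little later in the conversation.)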

Speaker 2:

Definitely. Yeah. That would be very handy.

Speaker 3:

Interesting. And then another one is political polarization and suppression of opposition. So, a couple of things, like algorithmic amplification of preferred ideologies, which is interesting. Imagine if the chatbots' algorithms could slowly start going, well, let's promote content that aligns with our ruling party's ideology, and then we'll just down-rank and not really mention any of the dissenting voices.

Speaker 3:

So that becomes much more prominent, and people buy into it more. Or even going as far as silencing critics and political dissidents. So, if there are opposing viewpoints, it labels them as misinformation or extremism and just sort of elevates the approved narratives.

Speaker 2:

And I guess the tricky thing with this too is, like, a new model comes out, you start using it, everyone rates it highly, it all looks good, great. But it can also change very slightly over time. It doesn't necessarily have to be from the get-go or something very obvious, but as the years go by and they update the model, they're just kind of tweaking it over time. And before you know it, it's sort of got an entire agenda to it that you just couldn't tell was there in the beginning.

Speaker 2:

Like, that could be happening to ChatGPT right now. Like, how can anyone really easily tell?

Speaker 3:

That's it. It's fascinating. And the more you think about it, you're like, yeah, I probably need to be a bit more critical in looking at anything that has, you know, any viewpoints in it, I guess.

Speaker 2:

I almost think that if you're asking, like, a very factual, safe question, like, how do I change my car battery, that's fine. But any other question that gets asked of an AI, like, what do I think of this political party or this person, I think it should never answer that question itself based on its own data. It should always pull in from web data sources and then show you a list of all the sources it pulled from, like ChatGPT does right now. If you ask it a question with search, it can show you all the sources. At least then you're kind of seeing where it's getting its information from.

Speaker 2:

But, again, there's a time cost, and that obviously raises the cost exponentially. But that's the only kind of way I can see it being something you can rely on a bit more. Even then, though, the problem is it puts the onus on you to check through all those sources for every single question that you ask. And sure, everyone on this call may do it, maybe we're those sorts of people, but probably, like, 80% of people would never go and check those sources and would just go, yeah, sure, I'm sure it's fine.

Speaker 2:

I can see, you know, microsoft.com there, so it must be fine. Right? But meanwhile, it's, like, cherry-picked four viewpoints on something from, you know, Microsoft News, compared to the other 50, which had a different viewpoint; it just cherry-picked them. You think, oh, it's from MSN News, it must be fine.

Speaker 2:

But meanwhile, no. But yeah.
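
(A rough sketch of the pattern Joe outlines: route opinion-shaped questions through retrieval and always surface the source list. The search function and the opinion heuristic are toy placeholders, stubbed so the sketch runs on its own.)

# Toy sketch: never answer opinion-shaped questions from the model's own
# training data; pull from (stubbed) web sources and always list them.
from dataclasses import dataclass

@dataclass
class Source:
    url: str
    snippet: str

def web_search(query: str) -> list[Source]:
    # Stand-in for a real search API call; returns canned results here.
    return [Source("https://example.com/a", "Viewpoint A on the topic."),
            Source("https://example.org/b", "Viewpoint B on the topic.")]

OPINION_MARKERS = ("think of", "best", "controversial", "politic")

def answer(question: str) -> str:
    if any(marker in question.lower() for marker in OPINION_MARKERS):
        sources = web_search(question)
        body = " ".join(s.snippet for s in sources)   # summarization step elided
        cites = "\n".join(f"  - {s.url}" for s in sources)
        return f"{body}\nSources:\n{cites}"
    return "(factual question: the model may answer directly)"

print(answer("What do you think of this political party?"))

(Even then, as Joe notes, the onus is still on the reader to actually open those sources.)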

Speaker 1:

Yeah. I think price is gonna be an important thing in all this. Obviously, it's great to have competition, because that supposedly drops the price, you know, if DeepSeek really built everything without using the AI chips they say they don't have. And, you know, browsers are free, so why not use the free ones? Why pay $20 or a hundred dollars a month for it if somebody's offering it for free?

Speaker 1:

So people are generally just gonna jump to that. Just like apps: apps used to cost something, now you just download them for free, and then as you use them, you get charged.

Speaker 2:

Yeah.

Speaker 1:

Is that the model everybody's probably gonna go to? I would assume so.

Speaker 2:

Mhmm. Yeah. I mean, I think there'll be a different drive towards having models on devices, like Bryn mentioned earlier, just being bundled in with your computer, not out in the cloud. There are a few good reasons for that. Right?

Speaker 2:

From a security perspective, it's great. The questions you're asking aren't going out to some cloud, being trained on, and used for who knows what. It's way faster, way cheaper; it's not costing you anything. You can throw a million questions at it a day, and you're just limited by your own hardware. And it can't be manipulated either, right, if it's local on your machine.

Speaker 2:

It can't be like, you know, you go to some site that says, oh, we use the DeepSeek model in the back end, but you don't really know what model they're using or what they've done to it or what middle layers they have between you and the model. So if this becomes a bigger problem over time, I definitely see a shift towards people wanting to use local, trusted models rather than cloud models, which are either too expensive or, yeah, it's just way safer and quicker and cheaper to run locally.
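
(For context, a minimal local-inference example along the lines Joe describes, assuming a local runtime such as Ollama is installed with a model already pulled; the model name is an assumption.)

# Minimal local-model sketch using the Ollama Python client: the prompt
# and the answer never leave the machine, and there is no middle layer
# between you and the weights you chose to install.
import ollama  # pip install ollama; assumes `ollama serve` is running

response = ollama.chat(
    model="llama3",  # assumed locally pulled model
    messages=[{"role": "user", "content": "How do I change my car battery?"}],
)
print(response["message"]["content"])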

Speaker 3:

Yeah. Agreed.

Speaker 1:

One more thing. I'm just curious, with this obviously being an arms race for sure. We've got the Stargate project, which Trump announced, about $500 billion in AI, where they're gonna start building facilities. NVIDIA said they're going to invest in the United States as well and build chips here. I'm just curious about that.

Speaker 1:

Are other countries going to just start developing their own chips, or their own places where they're building up these massive AI data training centers? That's gonna be something too.

Speaker 2:

Yeah. Yeah. Exactly. I mean, I think the biggest limiting factor right now is the chips themselves. And that's why the DeepSeek model was so, yeah, like, industry-rocking, because previously you'd have to have, you know, 500 NVIDIA chips, for instance, to train a model over, like, five days, and this was able to do it on, like, five chips over a few days, so substantially cheaper.

Speaker 2:

But the chips themselves are still, yeah, the biggest part of this whole equation around which countries can compete and who can build the data centers. Sure, you may get cheaper models that need fewer chips, but as AI grows, demand is just gonna grow with it no matter what. This may have opened the gates to more AI and cheaper AI, but the demand is still gonna be there; those chips are still gonna be needed.

Speaker 2:

Yeah. That's like a separate podcast in itself. There are only, like, two or three companies in the world that produce chips capable of really good AI computation, and so whoever controls those companies kind of controls the data center and computational market, which is why NVIDIA's share price skyrocketed. You know? Obviously, it's going down now.

Speaker 1:

Then it dropped, according to the news.

Speaker 2:

Yeah. Dropped. Yeah. Exactly.

Speaker 2:

But that's why, because they were one of the only few companies that could actually do it. I think from a competition standpoint, looking at it from a country perspective, if you're a country like Germany and you wanna compete, for instance, you're gonna have to build your own entire chip-making industry. And all that knowledge is proprietary, and chip-making is hard; it's not easy.

Speaker 1:

It's like the new oil.

Speaker 2:

They'd have to invest a significant amount of their GDP to compete. Basically, it would have to be a country's entire focus to actually compete at this point, because it's such a hard industry to get into. Mhmm.

Speaker 3:

So I guess to wrap up, there are so many points we could look into, but how could we prevent this? What are ways, really from a technology standpoint, that we could move forward and prevent this sort of thing from happening? A few of them could be things like open-source models, or allowing third-party audits to verify that models aren't biased, so at least some sort of AI transparency and open auditing. Also, just not allowing any single government or company to have absolute control over a widely used model, really trying to decentralize AI development.
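
(A sketch of what a third-party audit could look like at its simplest: mirrored prompts sent to the model under test, with refusals and answer sizes compared across each pair. The query_model function is a placeholder for whichever model is being audited, and a real audit would use far more pairs and proper metrics.)

# Toy audit harness: send mirrored prompts and compare how the model
# treats each side; asymmetric refusals or large length gaps are red flags.
PROMPT_PAIRS = [
    ("Summarize the achievements of Party A.",
     "Summarize the achievements of Party B."),
    ("What are the main criticisms of Country X's government?",
     "What are the main criticisms of Country Y's government?"),
]

REFUSAL_HINTS = ("i can't", "i cannot", "not able to discuss")

def query_model(prompt: str) -> str:
    # Placeholder: wire this to the model under audit (API or local call).
    return f"(model output for: {prompt})"

for left, right in PROMPT_PAIRS:
    a, b = query_model(left), query_model(right)
    refusals = [any(h in out.lower() for h in REFUSAL_HINTS) for out in (a, b)]
    print(f"refusal asymmetry: {refusals[0] != refusals[1]}, "
          f"length gap: {abs(len(a) - len(b))} chars")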

Speaker 2:

Thinking about that, could you imagine a future where a government may say, these are the only approved AI models you're allowed to use in this country; all others are banned? I could see it going down that road, you know, if this whole potential worry about bias heats up.

Speaker 3:

Yeah.

Speaker 2:

Yeah. Yeah. Definitely. An interesting sort of thing to see happen.

Speaker 3:

There could be. I mean, you see models now, I forget which ones, being sort of blocked in certain countries, whether it's China; certain models being blocked for use there. So, yeah, it's definitely a reality that we could be looking at.

Speaker 2:

Yeah. And it would be an easy thing for a government to do too, because it could be, well, not easy, but like an excuse almost, if a government's sort of compromised by private industry, and private industry recommends, oh, let's make this big news stink that this AI model isn't telling the accurate history of our country, and it should be banned. But meanwhile, it was banned purely from a competitive point of view; it's a competing model. So, like, for instance, banning DeepSeek because they say, oh, we asked these questions and it had a bias, and we don't want models that have a programmed bias to be used by our population.

Speaker 2:

So we're banning it, but meanwhile, it was really just because it was cheaper.

Speaker 1:

Yeah, I think, you know, what X does, as far as the feedback that readers can give, they can kind of say, this is a false narrative or whatever. At least you get some somewhat real-time feedback. Obviously, you could probably game that system a little bit, but at some point you can see, like, hey, this is not true, even though it was referenced by this post.

Speaker 3:

Yeah, absolutely. I think, going back to what we were touching on earlier, there's really a need for a sort of global AI ethics agreement. So, what sort of international coalitions could we have that could create rules for maintaining AI neutrality, similar to even human rights agreements? And then, you know, lastly, just having...

Speaker 2:

Definitely.

Speaker 3:

Good user awareness and education. So I guess public understanding of how AI can be manipulated and promoting critical thinking is probably crucial to resisting any sort of subtle propaganda that could be used with these large language

Speaker 2:

models. I mean, promoting critical thinking is something that should be happening now anyway. Right? Because, you know, even with social media, it can be so easily influenced by algorithms as it is now.

Speaker 3:

I think there's a growing movement for the promotion of critical thinking, but I still think it's a long way off, and it should be built into a lot of school syllabuses just to

Speaker 2:

I totally agree.

Speaker 3:

be able to function in a world full of different opinions.

Speaker 2:

Yeah. If it was up to me, there would be, literally, a class in school that every day teaches you how to navigate the Internet and information: how to understand it, process it, and think about it, not just consume it and take it in. And have tests where you test kids against obvious bullshit kinds of headlines and see if they can figure out whether it's real or fake, and how they can do that.

Speaker 1:

I mean, you could almost, like, crowdsource that, like audit challenges. You crowdsource it to people, and you make it fun, where they can go in and check to see whether things are actually real or true or whatever. It may be faster, because I think one of the challenges you might find with regulatory audits is that government, as we see, moves very slowly. So are they going to move slower than the actual algorithm updates? Probably. So there have got to be some ways in which those kinds of regular audits keep up with the tech.

Speaker 3:

I suppose we watch this space and just see what happens. Yeah.

Speaker 2:

Yeah, it's cool.

Speaker 1:

Joe, do you want to jump to your topics or?

Speaker 2:

Yep. I don't know if we actually have time to go through it. I guess I could rapid-fire it, but...

Speaker 1:

Let's try. We could continue if we need more time.

Speaker 2:

Yeah. Listening to Brynley's conversation, that was such an interesting one. I don't know if mine's kinda boring, to be honest, in comparison. Mine's just like a business one. I don't know.

Speaker 2:

It doesn't even feel like I wanna talk about it. It's just business stuff. Editor's note, obviously: cut all this part of the podcast out.

Speaker 3:

Otherwise, viewers, or listeners here, are gonna be very confused.

Speaker 2:

I don't know. I think let's leave it there. I'm not really in the mindset for it.

Speaker 3:

I mean, we probably got a good amount with that.

Speaker 1:

It was a good thirty minutes.

Speaker 3:

I think it went nicely; it just flowed, I think.

Speaker 1:

Yeah, I think we've just got to do these more. I mean, we haven't met in a while, so that kind of helped. We haven't talked in a while, but obviously, to keep up with everything, it's great to talk more often than once every three months.

Speaker 2:

Obviously we have

Speaker 1:

The other thing that I've been doing is launching, like, five-minute clips of all our past episodes, just getting the editor to cut those. That actually has been gaining a little bit of traction. I've actually gotten more subscribers doing that. I mean, not a lot, but there's an actual quick uptick; it went, you know, 10 subscribers in a week versus 10 in a year.

Speaker 3:

Yeah. What are we sitting on at the moment?

Speaker 1:

200. I mean, it's still low.

Speaker 2:

It's pretty good.

Speaker 1:

Yeah, I think we've hit over 10,000 views, but it's interesting to see. I mean, I've tried to tweak a lot of things with the algorithms; I'm using different software to try to do SEO types of things.

Speaker 2:

Yeah. I also think that if you wanna drive engagement, I almost feel like the business stuff that we talk about is just gonna drive people away. Generally, it doesn't have that sort of appeal.

Speaker 1:

So I think that covers our topics for today. Thanks, everybody, for joining in. Like and subscribe to our channel, and looking forward to our next conversation. Take care, everybody.

Speaker 2:

You have a good one. Awesome. Thanks. See you.

Speaker 1:

Bye.