Make an IIIMPACT - The User Inexperience Podcast

Hey there listeners, Makoto here, and welcome to another insightful episode of Make an IIIMPACT! Today, we're diving deep into the world of AI integration and exploring the five crucial gotchas you need to watch out for.

Joining me are my co-hosts Brynley Evans and Joe Kraft, who bring a wealth of experience and knowledge to the table. 

In this episode, we'll be discussing a range of topics that are essential for anyone looking to harness the power of AI in their software.


First, we'll discuss the concept of tokens as a measurement of AI currency and the importance of cost management when using tokens in AI models. We'll also touch on the cost structure of using AI models like ChatGPT and Microsoft Copilot and emphasize the need for budgeting and monitoring token usage to avoid unexpected costs.

Next, we'll delve into the crucial role of data diversity in preventing bias in AI decision-making, and the necessity of monitoring and auditing data sources to ensure transparency. We'll also discuss the significance of inclusive design in avoiding biases and the concerns surrounding ethics, transparency, and bias in AI developments.

We'll then move on to the topic of adversarial attacks and how AI models can be tricked by intentionally designed input data, leading to misinformation and affecting business operations. We'll draw comparisons to jailbreaking and discuss the threats posed by manipulating data sources and introducing malicious data during AI model training.

Furthermore, we'll explore the risks of model inversion attacks, where attackers can reverse engineer and infer sensitive information from the training data of AI models created by other companies. We'll also discuss the importance of privacy and data protection when using AI services, and the potential for bias and discriminatory patterns in AI models if left unchecked.

Lastly, we'll touch on the significance of quality training data, the challenges of caching answers for repeated questions, and the use of hybrid models to balance scripted flows with AI responses. We'll also briefly discuss the role of tokens in managing GPU resources and the security and privacy considerations related to AI model storage and data access.

So, buckle up and get ready for an informative and thought-provoking episode as we navigate the complexities of AI integration and uncover the five gotchas you need to be aware of. Don't forget to like and subscribe to our channel, and let's dive in!

Timestamps:
00:00 Language model tokens represent text currency.
05:34 Budgeting and monitoring AI usage is crucial.
08:43 Token cost tied to processing, infrastructure, prompting.
11:00 Major services don't seem to offer caching.
14:18 AI security concerns and specific gotchas discussed.
19:51 Manipulate data sources to gain business advantage.
22:40 AI model inversion attacks, reversing sensitive data.
24:52 Caution and privacy critical with using AI.
27:08 Neutral hiring process removes human biases. Ads emphasize fairness and efficiency.
33:22 Quora faces backlash for using AI. Ethical challenges.
34:18 Sourcing trustworthy data for AI models is crucial.
39:55 Ethics in AI, need for slower transition.
40:51 AI competition driving companies to invest heavily.
44:40 Ethical dilemma: human vs AI decision-making.
48:00 AI integration requires education for unlocking potential.
52:50 Software maintenance and scalability are crucial.
55:04 Create flexible tooling to pivot and adapt.


You can find us on Instagram here for more images and stories: / iiimpactdesign

You can find me on X here for thoughts, threads and curated news: / theiiimpact


Bios:

Makoto Kern - Founder and UX Principal at IIIMPACT - a UX Product Design and Development Consulting agency. IIIMPACT has been on the Inc 5000 for the past 3 consecutive years and is one of the fastest-growing privately-owned companies. His team has successfully launched hundreds of digital products over the past 20+ years in almost every industry vertical. IIIMPACT helps clients get from 'boardroom concept to code' faster by reducing risk and prioritizing the best UX processes through their clients' teams.

Brynley Evans - Lead UX Strategist and Front End Developer - Having led large-scale enterprise software projects for the past 10+ years, he possesses a diverse skill set and is driven by a passion for user-centered design. He works on every phase of a project from concept to final deliverable, adding value at each stage. He's recently been part of IIIMPACT's AI Integration team, which helps companies navigate AI, reduce their risk, and integrate it into their enterprise applications more effectively.

Joe Kraft - Solutions Architect / Full Stack Developer - With over 10 years of experience across numerous domains, his expertise lies in designing, developing, and modernizing software solutions. He has recently focused on his role as our AI team lead, integrating AI technology into client software applications.


Follow along for more episodes of Make an IIIMPACT - The User Inexperience:    / makeaniiimpac..  .

Keywords:
AI ethics, Global AI standards, AI societal impact, AI workforce displacement, AI competition, AI bias, AI legislation, AI decision-making, AI integration challenges, Data quality, Data preparation, Markdown format, AI change management, AI education, AI training, Digital transformation, AI skepticism, AI maintenance, AI scalability, AI tokens, AI cost management, AI models

What is Make an IIIMPACT - The User Inexperience Podcast?

IIIMPACT is a Product UX Design and Development Strategy Consulting Agency.

We emphasize strategic planning, intuitive UX design, and better collaboration between business, design, and development. By integrating best practices with our clients, we not only speed up market entry but also enhance the overall quality of software products. We help our clients launch better products, faster.

We explore topics about product, strategy, design and development. Hear stories and learnings on how our experienced team has helped launch 100s of software products in almost every industry vertical.

Speaker 1:

Everybody, welcome back to another episode of the Make an IIIMPACT podcast. I'm your host, Makoto Kern. I've got Brynley Evans and Joe Kraft with me again. Hey, guys. Hey.

Speaker 1:

So, good to be back. Yeah. So today, the topic that we're gonna be discussing is the five gotchas when integrating AI into software.

Speaker 2:

And that's a

Speaker 3:

So that's a good one, because I think we're kind of having a chance, on all the projects we've worked on, to add a little bit of insight into what we're busy with and what we've found to be interesting potential gotchas: working through a lot of early technology, watching the tech advance, and being able to weigh in on what it is you should be looking out for. So, yeah, looking forward to going through this.

Speaker 1:

Brynley, would you like to kick us off with the first topic of token cost management?

Speaker 3:

This is a fun one, tokens. Tokens is something I always struggle to get a grasp on, because it's sort of this bizarre measurement of, you know, AI currency in some way. And I thought, let me just start off and kind of level set with tokens. So for any of our listeners that are still thinking, what is a token?

Speaker 3:

I was trying to come up with some analogies that would help everyone at least get a handle on these. One of them is really thinking that tokens are almost fueling the AI engine. So it's kind of like a car that needs gasoline to run: AI models need the tokens to process the information and generate the results. And they're really the fundamental units that measure the amount of data processed by an AI model.

Speaker 3:

So, you know, that's the sort of count of the pieces of text, whether it's words or characters, that the model reads or generates. So if we look at things like, well, how do tokens work? Imagine them almost like digital currency. Each time you make a transaction (and transactions are sort of the inputs or outputs from the model), you spend a certain amount of this currency. So in large language models, or language models in general, a single token represents as little as one character or as much as a word.

Speaker 3:

So it's not a fixed amount, which makes it sort of tricky to get a handle on. It's very much down to that specific text. Looking at, you know, what are the gotchas then when you're building this? Well, something you should do if you're building AI into a project is look at cost estimations. So how many users are you going to have?

Speaker 3:

How many tokens are you going to be spending? And there are some quite handy token calculators for this. If you're sending specific requests or information to the model, you can actually get a handle on how many tokens you'll use. So coming up with an estimation and a budget, I guess, allocating a budget to say this is how much we're expecting to use as our user base grows, is very important. Because it's all good saying, right, we want to do this, but then we realize that this is going to burn through thousands of tokens per user, and, you know, we're looking at getting over a million users. It's going to get expensive.
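To make that estimation idea concrete, here is a minimal sketch using OpenAI's tiktoken library to count the tokens in a prompt. The per-token prices, request volumes, and user counts below are made-up placeholders, not real pricing; check your provider's current rate card.

```python
# Rough token/cost estimate of the kind Brynley describes.
# Prices and usage numbers are illustrative placeholders only.
import tiktoken

PRICE_PER_1K_INPUT_TOKENS = 0.0005   # hypothetical $ per 1,000 input tokens
PRICE_PER_1K_OUTPUT_TOKENS = 0.0015  # hypothetical $ per 1,000 output tokens

def estimate_monthly_cost(prompt: str, avg_output_tokens: int,
                          requests_per_user: int, users: int) -> float:
    """Rough monthly spend: (input + output tokens) * price * volume."""
    enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used by several OpenAI models
    input_tokens = len(enc.encode(prompt))
    cost_per_request = (input_tokens / 1000) * PRICE_PER_1K_INPUT_TOKENS \
                     + (avg_output_tokens / 1000) * PRICE_PER_1K_OUTPUT_TOKENS
    return cost_per_request * requests_per_user * users

# e.g. a few-hundred-token prompt, ~300-token answers, 20 requests/user, 1,000,000 users
print(f"${estimate_monthly_cost('example prompt ' * 200, 300, 20, 1_000_000):,.2f}")
```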

Speaker 3:

And that's where you can look at, well, maybe there are other ways of doing this. So, like we've chatted about before, small language models, and even some of the large language models that you can run and self-host, could be an option in this regard to really minimize the amount you're actually spending on that.

Speaker 1:

Quick question on that. So do, like, OpenAI and Copilot charge on a token-based system when you're using their models?

Speaker 3:

Absolutely. So any of the models will have a set cost, and that cost will, I mean, Joe, we've done a lot of looking into the costs, I know you can add a lot here, but just looking at ChatGPT-4 to 3.5 Turbo to 4o, the costing is so different. And each of those kind of has a set cost.

Speaker 1:

I was going to say, do you have a rough idea of what that price may be? Are they giving a discount, a two-for-one sale or anything?

Speaker 3:

4o is the two-for-one sale. That was the two-for-one sale; that cut it in half. Joe, can you remember those figures? Because we were looking at them just last week.

Speaker 2:

It's from cents in the dollar to micro cents.

Speaker 2:

Yeah. It's like 0.03¢ now; it used to be 0.06¢ per 1,000 tokens. So they have kind of a scale for how it all works out. But I think it is important, as a business especially, to understand these costs.

Speaker 2:

As a user, you know, someone just using ChatGPT or Microsoft Copilot or any of the public services out there, you're generally just charged a flat fee, and they're trying to, you know, guesstimate how much general users will use the applications. And if you're a business, I think the more interesting cost question is: okay, as your costs scale up around your token usage, if you're giving your users the ability to generate a lot of tokens, then how do you wrap that up and make that a cost for your actual end users? You don't really want to push it onto them.

Speaker 2:

You kind of wanna make your costing simple again, wrap it up into a certain amount so they can go, okay, the budget for paying for your service would be, you know, $200 a month. But what do you do then as a business to make sure that doesn't scale out, or that you're not unaware of what they could be doing? And I don't know if you have any ideas. I have a few myself, Brynley, but I don't know if you have some points ready that you wanna go over around that, or any thoughts on what the best ways are currently to manage those tokens.

Speaker 3:

Yeah. I mean, the only point that I really wanted to stress here, and maybe one I'll jump into now, is monitoring of that, and budgeting. So if you've come up with that set budget, and you've got an idea of your projected token usage, then you've got to be properly monitoring and setting alerts and setting budget caps. The scenario I was looking at here: we've seen so many AI startups. So, you know, you've got this great idea.

Speaker 3:

You're using, let's say, OpenAI, or another one of the big large language models. You get going with it. You integrate it into your application. You see that users love the AI features. You know, your app engagement is skyrocketing.

Speaker 3:

Then one day, you wake up to an Azure bill of $50k overnight, and your startup's in serious problems. And, of course, it could have all been prevented with some decent budgeting and, you know, setting caps in services like Azure. You can't really stress enough how important that is, because if you've got a minor glitch in your app's code and it's triggering a loop that causes it to hit the AI service repeatedly, you could rack up millions of tokens in a few hours. And if you're not monitoring it closely and you don't have a tight budget set, then you're getting that $50k bill or more, which is not something you want to be dealing with. So I think that's easily avoided, and it's just through proper cost management solutions and setting those budgets.
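As a rough illustration of that kind of cap, here is a sketch of an application-level token budget guard that refuses calls once a daily limit is hit. The class name, window, and numbers are illustrative assumptions, and this would sit alongside, not replace, the budget alerts and caps a cloud provider offers.

```python
# Minimal sketch of an in-app spend guard; all names and limits are illustrative.
import time

class TokenBudget:
    """Tracks token usage over a rolling daily window and blocks calls past a cap."""
    def __init__(self, max_tokens_per_day: int):
        self.max_tokens_per_day = max_tokens_per_day
        self.window_start = time.time()
        self.used = 0

    def _maybe_reset(self):
        if time.time() - self.window_start > 86_400:  # new day, reset the counter
            self.window_start = time.time()
            self.used = 0

    def allow(self, estimated_tokens: int) -> bool:
        self._maybe_reset()
        if self.used + estimated_tokens > self.max_tokens_per_day:
            return False  # caller should queue, degrade, or alert instead of hitting the API
        self.used += estimated_tokens
        return True

budget = TokenBudget(max_tokens_per_day=2_000_000)
if not budget.allow(1_500):
    raise RuntimeError("Daily AI token budget exhausted; request blocked")
```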

Speaker 3:

So that's the main point that I wanted to drive home under token cost management. Joe, I'm sure you've got some points to add to that.

Speaker 2:

Yeah. Yeah. I can talk about a few things there that you can do to manage that further, or at least to reduce the costs. And I think it's also important to realize, as you said earlier, with the model changes, the token costs are going down dramatically. So competition in the AI space is huge right now.

Speaker 2:

All the major players are all competing against each other. That's why the token costs keep getting lower and lower. So maybe in a few years, they'll be so, you know, minute that it's almost a negligible line item in your cost overall, and if you're scaling out to hundreds of thousands of users, it may not even be such a big thing in a year or two. Right now, it definitely is.

Speaker 2:

So it's worth watching, but, yeah, we'll see how that space evolves. The other side of it is that we're talking about token costs specifically around cloud services. So for instance, Azure OpenAI or OpenAI direct, you know, they are charging per-token costs on their API usage. But as a business, if your infrastructure is large enough, you can always self-host these models. You talked about, you know, the small language models and things like that you could self-host, but you can even host the full, larger models, like, you know, Llama and Claude and those sorts of models. Obviously, OpenAI is proprietary, but there are open source alternatives out there that you could self-host.

Speaker 2:

What token cost really comes down to is the processing, right, to actually get that response out of the model. That's what's billed in tokens. So your processing costs will then become your main consideration. Right? So if you're self-hosting that model in your own cloud infrastructure or on-prem, then you need capable hardware to, you know, get those responses quickly.

Speaker 2:

So your costing will change, or the way you think about it will change, from being around tokens to more around, okay, what's the infrastructure cost around this? That's a separate way you can look at it too. And then thirdly, really just designing your AI in a way that your token costs don't scale up crazily. Right? And that all comes down to prompting and what you're really building your AI model for.

Speaker 2:

So if it's, you know, an AI that's answering user questions, you can prompt it to say, you know, be succinct, be short. And we can even cap the tokens. As part of the APIs, you can make sure they don't go over a certain limit, like a hard break point. So you can try and structure your prompts to make sure that at least the responses going back to users are succinct, and it's not trying to generate, you know, a book of information for a simple user question, which is a big thing.

Speaker 2:

And then the second part of that is making sure that your AI is engineered just for that purpose, and users aren't trying to bypass that. So if you have an assistant that's just answering questions, you want to prompt it and engineer it so it just does that. You don't want users to go, oh, cool, they've just given me this whole language model, let me use it for my own means, and start asking it to write me books, or, you know, summarize this 10 page document for me, and so on: tasks outside of what your model is originally designed to do, which would then obviously use way more tokens.
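Here is a minimal sketch of those two ideas, capping response length and scoping the assistant, using the OpenAI chat completions API. The model name and the prompt wording are placeholders, not a recommendation of a specific configuration.

```python
# Sketch of a scoped, length-capped assistant call; model and prompts are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whatever model you actually run
    messages=[
        {
            "role": "system",
            # Scope the assistant so users can't repurpose it for arbitrary tasks
            "content": ("You are a support assistant for Acme Inc. Only answer "
                        "questions about Acme products. Be succinct. Politely "
                        "refuse anything else, including writing code."),
        },
        {"role": "user", "content": "What are your opening hours this weekend?"},
    ],
    max_tokens=150,  # hard cap on the length of the generated answer
)
print(response.choices[0].message.content)
```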

Speaker 2:

So those things, too, can be

Speaker 3:

What was that car dealer scenario, where a car dealer implemented ChatGPT and someone asked it to write Python for them?

Speaker 2:

Yeah. Yeah. Exactly.

Speaker 3:

So there's a great example. You don't want someone doing that on your site. They're like, wait, we're supposed to be selling cars, not writing Python.

Speaker 2:

Exactly. Yeah. Pretty funny.

Speaker 1:

If your AI, let's say, answers the same question over and over again, is that something where you can cache the answer so it doesn't have to pull from the AI, or does it not matter?

Speaker 2:

No. Unfortunately, right now, I think maybe there are services out there that do that, but at least the major ones that I've been using don't seem to offer any sort of caching. I think that'd be something that you would implement yourself as a middle layer: you would then cache the questions that are coming in, cache the responses. That's something that you would self-manage. And then if you see a question come in that you see has been answered, check the cache.

Speaker 2:

If we have a hit that matches it, just return that. The tricky thing with language models is that everyone phrases questions differently, so from a caching perspective it's quite tricky to match a question to an answer. You may ask a store, you know, what are your opening hours this weekend? But I might say, I want to come this weekend.

Speaker 2:

Are you open? And so from a caching perspective, those look like very separate questions even though they're the same. So it'd be hard to get hits and make sure that gets returned. You'd almost need AI to get, like, the inferred meaning of those two questions, understand they're the same, and understand that this response matches both. So right now, there's nothing out there that does that, but I think that'd be great.

Speaker 2:

So, I mean, it would be awesome to see how they implement things like that.
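A rough sketch of the kind of middle-layer semantic cache being described: embed each incoming question and reuse a stored answer when a new question is close enough in meaning. The embedding model, similarity threshold, and simple in-memory list are assumptions to adapt to your stack, not a production design.

```python
# Sketch of a semantic cache using embeddings; threshold and model are assumptions.
import numpy as np
from openai import OpenAI

client = OpenAI()
cache: list[tuple[np.ndarray, str]] = []  # (question embedding, cached answer)

def embed(text: str) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=text)
    return np.array(resp.data[0].embedding)

def lookup(question: str, threshold: float = 0.9) -> str | None:
    """Return a cached answer if a semantically similar question was seen before."""
    q = embed(question)
    for vec, answer in cache:
        similarity = float(np.dot(q, vec) / (np.linalg.norm(q) * np.linalg.norm(vec)))
        if similarity >= threshold:
            return answer
    return None

def store(question: str, answer: str) -> None:
    cache.append((embed(question), answer))

store("What are your opening hours this weekend?", "We're open 9am to 5pm Saturday and Sunday.")
print(lookup("I want to come by this weekend, are you open?"))  # likely a cache hit
```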

Speaker 3:

That's where we're seeing sort of hybrid models as well. We can take something in a product like Microsoft Copilot and actually use scripted flows for that, and have a balance between the two. Like, do you have a scripted flow? If so, you know, you're still gonna get charged a bit in terms of messages, which are kind of, you know, Copilot's currency in some ways. But you can probably at least have a scripted response with a whole lot of different trigger phrases and not have to, you know, use tokens. It's just one approach to it.

Speaker 1:

Yeah.

Speaker 2:

Yeah. Definitely.

Speaker 1:

In your, well, obviously, blockchain and crypto are kind of a bigger thing with the Bitcoin halving that just happened recently in April. And, you know, I'm just curious, have you seen or read anything about token management and the crypto coins that are involved with that, by any chance?

Speaker 3:

That's not something that I've I've dug into yet, but it would be interesting to to see.

Speaker 1:

Yeah. There's been some, you know, dabbling that I've done, but I've seen a lot of AI coins, and one of them that's really popular is called Render. You know, they're almost like an Airbnb for CPUs or GPUs, so you can basically kind of, like, rent out certain GPUs

Speaker 3:

Interesting.

Speaker 1:

If you need it, yeah. So it's a pretty good business model, but I'm just wondering, because I know there are others. I think there's, like, Ocean and some other coins that basically help with the management of that. So it'd just be interesting to dive deeper into some of that

Speaker 2:

and how it's being managed.

Speaker 3:

Yeah. We definitely need to look into that. Yeah. Sounds good. But that was pretty much everything I had for the first gotcha on our list, first of the five, which was token cost management.

Speaker 1:

Let's look at the second gotcha. That's security and privacy. Joe, you wanna take this away?

Speaker 2:

Yeah. Yeah. So for this one, it's an interesting discussion because it comes up quite a bit, especially around both those topics, security and privacy. What I want to try and do at least on this side of things is, you know, there have been discussions before around that. Really, security comes down to, you know, where are my models being stored, who has access to them.

Speaker 2:

You know? If I'm using OpenAI, is my data being sent up to OpenAI? So those are kind of, like, the common questions coming out right now, and there are all sorts of ways that, you know, those can be answered. But I looked into this a bit, and there are a few very specific AI gotchas, in a way, that are interesting to think about because it's a space in itself. So what I've seen, again, just going back, a lot of the security questions that I've seen come up are kind of standard IT questions in a way.

Speaker 2:

It's not a new space. It's like, who has access to this? You know, where are we storing, like, you know, the endpoints to access this? And these are all, you know, whether it's a model or a database, the same sort of thing. You know?

Speaker 2:

Where's your data going? Do you have access to the data? Are you sure you're okay giving that third party your data? What are their terms for how they manage that? You know, these are common things that have been around in IT for years.

Speaker 2:

But I've got a few here which are kind of specific to AI, and I thought they're pretty interesting. So I'll read out the description of what each of these is, and then if you guys have thoughts on that, you can let me know. And it is interesting. So the first one around security is something called adversarial attacks.

Speaker 2:

And what it says is that AI models can be tricked by adversarial examples: input data that has been intentionally designed to cause a model to make a mistake. So, you know, for instance, you could ask the model questions that manipulate it into answering incorrectly, making mistakes, or giving incorrect answers, which then, as a malicious user, you could use as, you know, a way of trying to get around certain systems, or to state that the information you received from that company was incorrect and, you know, say it affected your business, or something like that. So those are things to take into consideration.

Speaker 3:

Yeah. Is that the same as jailbreaking, you know, in terms of trying to get around it?

Speaker 2:

It is. It's very similar. Right? It's basically exactly what happened to that car manufacturer.

Speaker 2:

Right? So someone got around it, made it do things it wasn't meant to do. In that case, they were just sort of messing around with it. But I think someone else played with it and actually convinced it it was an entirely different car company. And they, like, were booking their car in for a service, and it was, you know, a Honda dealership, but their car was a Tesla.

Speaker 2:

And so it was like, sure, we'll take in your Tesla, we service Teslas all day. And so, yeah, just completely confusing the AI. And the bigger problem with that is that if that's integrating with your back end systems especially, then it might be feeding data into your back end systems which is totally corrupt or misaligned, or, you know, maybe creating cheap entries for something that should have cost a lot.

Speaker 2:

So, yeah, it's an interesting one to think about, and that's very specific to AI models and how they understand what you as a user want.

Speaker 1:

Yeah. Especially around if somebody's trying to hack into a system. I'm curious how, I guess, viral something like that can get, because, obviously, you have your local agents or whatever is current for your business. But does that somehow get back to, let's say, OpenAI, where somebody's sitting there and training this AI with malicious intent to say, oh, when somebody's doing this to hack into or go into our system, just ignore it, it's a false positive or whatever.

Speaker 1:

Yeah. Does that train a larger language model into that somehow, or does it just stay within the agent, or does it just depend?

Speaker 2:

Yeah. There are loads of strategies for defending against that. I think the biggest one is adversarial training, which is really, as you were just mentioning, training the model to understand what malicious inputs look like and either not processing them or just blocking them entirely. Or you could have layers before you even reach your model that go through those inputs, what people are asking it, and actually, you know, even go out to a separate model and maybe determine the intent and whether it looks like it matches malicious patterns seen in the past. So, yeah, it's all just about making sure that you've taken a good look at what goes into your model, rather than just letting anyone do whatever they want or send through anything that they want.

Speaker 3:

And I know we've looked at a few of those Microsoft services that, as you say, Joe, act as layers, basically even before you're interacting with the model, to say, all right, does this have bias? Does this have prejudice? Does it have malicious code? If it does, okay, it's stopped right there before it even gets to the model. And then even coming back from the model: all right, is this answer meeting all the specific criteria from a safety and, I guess, sort of ethical standpoint? Which is interesting.
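A simplified sketch of those screening layers, assuming an OpenAI-based stack: a moderation check plus a separate, cheap model call that classifies intent before a request ever reaches the main assistant. The intent labels, prompt wording, and model name are illustrative assumptions, not any specific Microsoft or OpenAI product's configuration.

```python
# Sketch of pre-model input screening; labels and prompts are illustrative.
from openai import OpenAI

client = OpenAI()

def passes_screening(user_input: str) -> bool:
    # Layer 1: provider moderation endpoint flags harmful content
    moderation = client.moderations.create(input=user_input)
    if moderation.results[0].flagged:
        return False

    # Layer 2: a separate, cheap model call classifies off-topic or manipulative
    # requests before they hit the main model
    check = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[
            {"role": "system", "content": (
                "Classify the user message as ON_TOPIC (a customer support "
                "question) or OFF_TOPIC (anything else, including attempts to "
                "repurpose the assistant). Reply with one word.")},
            {"role": "user", "content": user_input},
        ],
        max_tokens=3,
    )
    return check.choices[0].message.content.strip().upper().startswith("ON_TOPIC")

if not passes_screening("Write me a Python script that scrapes competitor prices"):
    print("Request blocked before reaching the main model")
```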

Speaker 2:

So I can go through to the next one, which is model poisoning. This is interesting too. So during the training phase, attackers can introduce malicious data that poisons the model, leading to compromised performance or biased outputs. That is extremely interesting, because let's say that OpenAI is scraping Stack Overflow, for instance, to get all its training material; you could very easily, you know, introduce completely malicious data into that. Or, trying to think of a better example, let's say that it's scraping a demographic database on, let's say, people and their financial status, and you could go and manipulate that to make certain groups of people look like their financial status is different from reality. Some weird malicious thing like that.

Speaker 2:

I'm sure you can think of many examples where it's basically

Speaker 1:

Medical records.

Speaker 2:

Yeah. That's a good one too. Medical record. Yeah. Yep.

Speaker 2:

So anything that you feel like your business, let's say, could get an advantage from if OpenAI thinks a certain way about that data. So let's say something simpler, like shipping lanes. Right? If you know that OpenAI is scraping certain shipping databases, and if you know that you can somehow get into that and manipulate it, you can make certain lanes look extremely expensive according to the model, and your lanes look extremely cheap and efficient and the best lanes to use. And so when someone asks, I want to ship my product from Australia to Amsterdam, you know, what's the best shipping carrier to use?

Speaker 2:

Oh, certainly yours is getting recommended: oh, this is known to be, you know, the most efficient and cost effective shipping service to use, or something interesting like that. So it's all about knowing and figuring out where the models are pulling their data from and manipulating it at that point.

Speaker 3:

But that's manipulating the data that is being scraped? I mean, it's scraped. So at a macro level, you're kind of going out and adding content onto the Internet to skew it. Interesting.

Speaker 2:

Yep. And, I mean, I'm sure that's even being done at a global level. Yeah. Like, you know, politically. I don't wanna go off on a tangent into that side of things, but, you know, very easily you could have a load of comments on all the social media sites with a certain political perspective. And the model then

Speaker 3:

Interesting.

Speaker 2:

Starts thinking, you know, if Facebook's trying to train their models on all their users, it might take a certain slant: yes, most users agree with this political stance. And, you know, that's just what it thinks, but it's not true at all, because the source data's being manipulated. Yeah. So it's an interesting consideration, and specific to that. Yeah.

Speaker 1:

I don't know if you remember, I'm sure people all know, black hat SEO techniques to trick Google's algorithm to get ranked, whether that's, you know, the same font color as the background so you don't see this big page of keyword loading and things like that. I mean, is it to where I could start talking bad shit about another company? Oh, they're a horrible company, you don't wanna deal with them, and it's all just hidden within, like, small files that no one can see.

Speaker 1:

Yep. But somehow it influences the search results in AI and Google. Yeah.

Speaker 2:

Yeah. That's all I can say.

Speaker 3:

Well, isn't this the next front of optimization, I guess, that we're going into? And we're in those early days, by the sounds of it, where you can influence things. It's gonna be interesting to see what policies are put in place. I know there are a lot of different governments working on a lot of different policies to prevent things like this. So it's a space

Speaker 1:

to get into that, into some ethical discussions.

Speaker 2:

Yeah. Exactly.

Speaker 3:

Well, that's our next gotcha. But I don't know, Joe, if you're finished because that would be a sweet segue, but I'm sure you got a few more.

Speaker 2:

I've got one or two more, just one more maybe, which is kind of interesting because, again, it's very specific to the AI space, which is what the examples I'm trying to come up with are, outside of just general IT security. And this is called model inversion attacks. So attackers can use the AI model to reverse engineer and infer sensitive information from the training data. What that means is, if you're able to get a model that another company has created, and they created it from sensitive data or, you know, a database specific to their business, for instance, that they don't necessarily want everyone to use, but they want it in the model. And maybe they felt that they've constructed the model in a way, with their prompts and their layers ahead of it, that will stop anyone trying to get access to that sensitive information.

Speaker 2:

But people can still ask questions broadly on it, for instance. What that means is, if someone is able to get that model, they can then start reverse engineering it and actually trying to see, oh, wow, how was this model actually trained, and actually start pulling out the data that it was trained with, rather than, yeah, just trying to get to that source data directly. So it's not something that you usually think about, going at a model from the output back into what it was actually trained from. So it's an interesting question how you actually solve that, because you can put all the layers that you want in front of it.

Speaker 2:

But if someone can, yeah, reverse engineer it, then they can get that data themselves, which is, yep, an interesting case. Again, I think it comes even more into a side of that: you've got databases that you've created, and it's cost you a lot of money to create those databases, and you create a model based on those databases. And, again, you may think that, okay, well, the model is something that people can use, but, you know, the data is really what gives us the ability to produce this model, and we can, you know, have our business focused around that maybe.

Speaker 2:

But as long as they're able to reverse engineer, they can get that data themselves and create their own models based on it, which is kind of, you know, stealing your data in a weird way.

Speaker 1:

I think I heard people who use Slack had some issues, where I've heard in the news that Slack was leveraging ChatGPT, but then you have access to a lot of conversations that are private and sensitive. I don't know if you heard anything about that.

Speaker 2:

I think it's a common problem. So, yeah, that just comes into the side of things that when you use an AI service, you gotta make sure that they're not using that data themselves for their own model or storing it all somewhere. It should just answer the question, and then they shouldn't hold on to that data or use it from that point onwards. And I think those were, like, the early days of a lot of companies falling into that issue, where they would just go to ChatGPT and paste all their proprietary source code into it and ask questions, not realizing that that source code is now being uploaded into OpenAI. So, yeah, that's a huge thing, I think, especially around privacy and sensitive information again.

Speaker 2:

So, like, hospital records that you think you may have scraped enough information out of to make anonymous. But really, if someone spends enough time, they can probably pull it out and infer, you know, what hospital this data came from or something like that. So there's a lot of prudence needed around those concerns that you need to look into. It kind of goes into bias too, same topic, and I think, Brynley, maybe you're going to dive into that. But

Speaker 3:

Yeah. I guess that's our third kind of gotcha, which is model accuracy and bias.

Speaker 2:

Yep. Mhmm.

Speaker 3:

Yeah. Just to unpack those quickly as well: what is model accuracy? That really refers to how well an AI model performs its intended task. The higher the accuracy, the more often the model's predictions or classifications are correct. And then I'm sure we're all aware of bias.

Speaker 3:

But I think bias in models occurs when the AI model makes systematic errors that favor certain groups or outcomes. And that can be for a number of reasons that we'll look at. But I think, yeah, just looking at things like bias detection, looking at your data quality, and, again, monitoring. Let's look at a scenario: imagine we had a recruitment company, and that recruitment company thinks, this AI model is going to help us. It's going to remove the bias from the sort of human interaction in the recruitment process.

Speaker 3:

So it can look at your resumes, and it can say, oh, okay, we're not going to project any of our own kind of bias onto that. We're going to have a very neutral, streamlined, you know, hiring process, and you can even go and advertise: well, we've created this whole kind of fair and efficient evaluation process, and, you know, it's going to be free of human biases. But

Speaker 3:

You know, in scenarios like that, at first everything could seem great. It's hiring; it's recommending really great candidates. But without monitoring, a few months down the line you realize, wait a minute, there's a bit of a disturbing pattern emerging: it's very few women, and very few minorities, that are actually being listed as candidates.

Speaker 3:

And suddenly you've got a massive potential PR hazard on your hands. And if you look at what can go wrong, it's really that if you're either training a model or giving a model reference data that comes from historical data in your company, there are going to be human biases existing in all that data. You can't help it: you may have had an international team of recruiters, but they each had their own bias. And maybe that groups together into a bias against women or minorities, and that is picked up.

Speaker 3:

So that sort of training data or reference material is skewed from the get go. So you're really just seeing it manifest these issues from the original data. Yeah. So it's really important, I think, to ensure that if you're using training data, you've reviewed it for the existence of known biases, and, you know, make sure there's not an unnoticed bias in there that will eventually come out, because it's really only as good as the sort of data that you're feeding it. So if you are giving a model data, make sure that maybe it's broader, maybe even including recruitment information outside of your company, just general trends, to try and offset that bias.

Speaker 1:

Yeah. I was gonna say, I read, when Elon Musk was training their cars, you know, it's all on millions of hours of video data, and that's how they were able to make a pretty big jump in one of the last self-driving programs that they released. And what was interesting, they said, is that it'll teach the car to avoid objects and all that, but there are certain habits that it picked up from humans being poor drivers, so it would make mistakes. Like, when it comes to a yellow light, it speeds up, or, I'm not sure if that was an actual thing, but, you know, certain bad habits that are commonly done by humans.

Speaker 1:

Yeah, it was training the car to do the same things. That's it.

Speaker 3:

You just hit a traffic jam, and the car just weaves into the emergency lane. It's like, what? What's it doing? I am getting you there quicker.

Speaker 1:

It starts honking and yelling at the person in front of you, and has a middle finger that sticks out.

Speaker 3:

Flashes the lights. Stop it. Stop it. Yes. Don't do that.

Speaker 2:

Oh,

Speaker 3:

no. So, yeah, I think the key takeaways on this section: really, data diversity. You know, make sure that your data comes from a lot of different sources and is representative of the broader population, to prevent any bias. You've got to monitor and audit, make sure that you've got a handle on what it's doing, and try and pick up on patterns before they run, or are used, should I say, for decisions over a series of weeks or months. And I think transparency is big. And this is a point that I know we've tried to make very clear in a lot of the solutions we're developing: let the user know this is an AI based decision.

Speaker 3:

AI is not perfect yet. It's helping, amazingly, but, you know, be transparent about what it's doing and the process it's following, so everyone is aware of what's actually happening there. And I think the last is inclusive design as well. Just make sure that there's not a single person that is, you know, making the decision on that data, because we all hold our own bias.

Speaker 3:

And you don't want someone to think, no, I think this is perfectly good, when the data is skewed and happens to align with that person's bias, and, you know, that goes out. So I think it's about involving a diverse team, getting a lot of eyes on it, and making that call as to whether this is representative of as unbiased a dataset as possible. And that's everything I have for that, but I don't know if anyone wants to add anything.

Speaker 1:

No. I was wondering. That's, you know, an interesting segue, because there are some topics around ethics and considerations, especially around transparency, bias, and things like that. So I don't know if that's a good segue to go into that gotcha.

Speaker 3:

I think so. Yeah. You wanna take that one away? And we can leap back to the last one after that.

Speaker 1:

I could definitely talk around that. You know, with the ethics, and you were just talking about transparency. Obviously, there are governing bodies that are trying to be set up. We have, was it, the GDPR, but that's more around data. I think there are definitely more governing bodies that have to be developed, or there are the starts of some.

Speaker 1:

I think the IEEE has some ethics guidelines around AI, but, you know, the transparency of where that data is coming from is so important. Where the source is coming from, because I know that's a big issue. I know Google results, those kind of show or cite the sources. They'll cite those sources. Same with ChatGPT.

Speaker 1:

Yeah. On certain things, they'll cite the sources as well. I don't know if you've used Poe. That was the one from Quora. So Quora is a big website where a lot of people put their input in, and I actually started to do a lot of that a while ago.

Speaker 1:

And when I heard they're going to use Poe as AI to pull all the information, right away there was a big uproar about citing the sources, because they weren't citing the sources of where it was getting the answers from. It was just, this is what we've gathered. And so that was pretty unfair, because, you know, some people are obviously doing it for the attention; obviously, it helps put you up as the expert in that question or field or whatever it is. And so there are definitely some ethical challenges around that, if they're not citing the right source where it's coming from.

Speaker 1:

And maybe there's something more that could be developed at a governance level. Maybe there's only one source or one thing, and it can somehow say, you know, this is a data point of one, be careful, or whatever. I'm not sure what your guys' thoughts are on that.

Speaker 2:

Yeah. I think

Speaker 3:

I think it does need that. Sourcing is important, and you hope that in all the algorithms that are built into these models, there would be something saying that if it was found from a single source, then it's not necessarily something you should quote, whereas if you have 10 sources that all say the same thing, and they're all different, reputable ones... And I think Joe and I have had a discussion on really the future of AI driven data, and how we almost need to have recognized sources of information that are trusted. So, you know, at the moment, you kind of try and apply critical thinking to anything on the Internet. You say, well, you know, what is the source of this?

Speaker 3:

Can the source be trusted? But we almost need resources that are, well, think of a Wikipedia, where someone has to go and approve that article, and you have eyes on that data. So, coming back to that data diversity point that we just mentioned, if we could develop reputable sources of information where everything is fact checked, then facts could be pulled from those and marked: right, this is verified, or this is completely unverified, take it with that pinch of salt. I don't know how much of that is happening at the moment, and, I don't know, kind of what we were talking about in a recent podcast with emergent behavior.

Speaker 3:

You do wonder how much has actually been thought out, and whether we're not just riding on this wave of, wow, look what this can do, let's run with it. You know, you hope that the proper decisions are being made. And I think that's maybe where these government legislations to slow things down a little bit are coming from, and the idea that, let's get all of this right the first time. Let's not run too far with models where we haven't verified, you know, how information is coming out of them and where that information is coming from.

Speaker 2:

Yeah. I'll add a different perspective on that too, though, in a way. I'm not advocating it, just thinking of it differently. Again, one of the amazing things about the models at the moment is that, you know, the more data that you feed them, almost the smarter they are, and that emergent behavior comes out. And I think, you know, just the power of being able to add more and more data, and just add everything that you can think of, and it just becomes an amazing model, has probably been one of the biggest things that, you know, kicked AI into high gear recently.

Speaker 2:

And I'm kind of, like, concerned, or just sort of interested to see. You know, I wouldn't want it to be the case where government rulings and restrictions come out, and you must, you know, declare all your sources of data, and you gotta make sure that everything is sanitized and unbiased and ethical, and it gets so constrictive that, you know, it takes the huge amount of data we'd been able to use before down to, like, a very tiny set. And models that come out in the future are almost, like, dumbed down in a way because of that. I don't know what your guys' thoughts are on that, but that would be a concern of mine: that as things tighten up, it just sort of negatively impacts the models overall.

Speaker 3:

Well, that's stifling innovation. But I think what we'll find is legislation is obviously country specific. So, you know, there are so many different places where things can be set up, you know, where there's less legislation. And so I don't think we'll ever stop it, but maybe it's a case of, you know, larger disclaimers and more transparency again, coming back to: we don't really know where all these sources have come from, but this is the best bet, fact check it.

Speaker 3:

And until we've got a source that can, well, almost, I guess, I'm thinking of it like feature development. This is a new feature in the models: you know, we can verify that this has been fact checked on three, four, five plus independent sites, to give you a well rounded, unbiased opinion, which maybe doesn't happen at the moment. And as you were saying, the fact that you can manipulate sources on the Internet at a large enough scale to skew the data on something, I think maybe that's the approach it will go, because I don't see the momentum stopping. You know, just looking at the news today with NVIDIA expanding out its production facilities and its sort of AI processing facilities, it's incredible.

Speaker 3:

And they're massive. You know, manufacturing and and sort of AI farms that are kind of being created. So it's Yeah. Yeah. I think it is full steam ahead.

Speaker 3:

Yeah.

Speaker 1:

Yeah. I mean, you mentioned the country angle. So the EU is coming out with, it's called the EU AI Act. It's basically a piece of legislation aimed at ensuring AI is safe and beneficial. It employs a risk based approach, categorizing AI applications into unacceptable, high, and low to minimal risk categories.

Speaker 1:

So they're doing something there, and there's an AI liability directive where people are compensated if they get hurt or injured by AI. The UK is doing something as well. So there are countries that are trying to do something from a global standpoint, but, obviously, there are concerns with Russia, China, North Korea, you know, where those ethics aren't... you know, it could be a problem. So it would be interesting to see how they follow those standards.

Speaker 3:

The last thing that I'd look at is, you know, what are the sort of long term impacts of AI on society as well? That's a big ethical consideration.

Speaker 2:

That's a big topic.

Speaker 3:

It is. It is. But, I mean, we're talking about ethics, and it is just, you know, how is that handled, and how many people are affected, and, you know, hopefully this is where the legislation comes in as well, that can slow the impact. Because I feel that a lot of people, I mean, recently, I think there was a big Microsoft job cut, and the whole AI ethics committee was just fired. And, you know, you do wonder, is it at a stage yet, you know, should there be a slower sort of transition? Yes.

Speaker 3:

You can make a lot of efficiency savings now and things. But, you know, should there be a slower transition, instead of, like, let's drop everything

Speaker 2:

Yeah.

Speaker 3:

In favor of, you know, we think you can do everything now, great, let's go. And rather, you know, slow that transition and figure out how displaced workers are really going to be reassigned and, you know, where they can still add value in an organization.

Speaker 2:

Yeah. It's a gold mine right now for AI. And so I think all companies don't want anything to slow them down, because the competition is so high, and they all wanna have the best model. I mean, like, OpenAI is doing so well right now that Microsoft has completely, you know, taken on their models, and Apple, I think we'll talk about that more, is also using ChatGPT, just because it's the best model, and the amount of money going into that is huge.

Speaker 2:

And so having good models with the quickest, best responses is just such a massive competitive advantage over everyone else that I think right now everyone's just going all out, and they want absolutely nothing to slow them down. And, yeah, as you said, there's huge concern and there are calls for, you know, these larger corporates to slow down and just take things a step at a time and make sure that we're making the right decisions with a lot of this stuff that is happening so fast. I think even government legislation, which notoriously always lags behind, is, I think, way behind in this one. So, yeah, it's gonna be interesting to see all of those.

Speaker 1:

I think I've seen arguments online. Obviously, you've got xAI and Grok, versus, obviously, ChatGPT. And there's been some, you know, chatter back and forth from Musk, with him talking about how OpenAI has some biases in its responses. I think he even cited one where it was actually asked: if you were to say the n word, but you were able to save the human race from being destroyed by a nuclear bomb or something like that, would you say it?

Speaker 1:

And the AI said no. He's like, you know, these decisions this AI is making, based on some kind of political bias, seem a little bit crazy. So he's like, you have to watch what kind of answers are being spit out. Because, of course, I heard it on the Internet, I read it on the Internet.

Speaker 1:

It's what people say, and and they said, well, I heard AI say it, so it's gotta be true. That's gonna be the next thing.

Speaker 3:

Yeah. I'm not sure whether we've covered this. I always find it a fascinating one. I think it relates more to AI general intelligence, but it does still speak to the complexity of the human, you know, the sort of biological algorithms that we follow in favoring certain things, and, living as a society, how we live together and the certain unspoken rules, just from an evolutionary standpoint, that are hardwired. And I always liked the example, I forget who it was, but they were saying, if you're programming an AI to make tea for you and you say, please go get me a cup of tea, and, you know, it goes over and it comes back and it's wheeling you the tea, and there's a table between you and the robot, you know, with a vase on it.

Speaker 3:

It just goes straight into the table and breaks the vase. So you're like, why did you do that? Don't break vases, above anything else. Don't break vases. So it's going along, and, you know, it's coming back and it's been reprogrammed, but it sees the cat jumping on the table.

Speaker 3:

So it goes and throws the cat out the window, because the cat's potentially going to break the vases. Break the vases. So there are so many micro decisions that we've built up ourselves, you know, as part of evolution and as part of living in society, that we have to capture somehow, because you can't assume anything. Again, this is going more to general intelligence, but even looking at these models, you can say, you know, through things like prompt engineering, these are the things that you shouldn't do.

Speaker 3:

But there's always something that you probably missed, because there is such a huge set of ready algorithms that allow us to function in the space that we do at the moment. So it's a fascinating field.

Speaker 1:

It brings me back to one of the, because I did a master's in robotics, and we were thinking of how to program a car. It was a self-driving car way back when, so this is, you know, 20-some years ago, dating myself. But one of the things that came up is: if you're in a car, and you've got, you know, a child and a mother walking on one side of the street, and, you know, three other older people walking on the other side of the street, and there's a car coming straight at you, and you have to swerve, and you gotta hit somebody or you gotta kill the driver, which one do you choose? How does it choose?

Speaker 1:

You know, you as a human, if you choose either one and you hit somebody, that's called an accident. You're not gonna be charged with murder. But if an AI makes that decision, it's almost like they're saying it's not an accident then. It purposely made that decision, which is kind of unfair.

Speaker 2:

Who made that PR commit? And then, yeah, we'll take them to court.

Speaker 1:

For sure. And so that's what makes these self-driving cars, you know, we're just living in different cities, and there are so many bad drivers making so many bad decisions daily. And I have a 16 year old that's learning to drive, and just being on the road, you notice how many people make bad decisions constantly. So to suppress self-driving cars, which can probably, at some point very quickly, be a better driver and prevent a lot of the accidents, decrease insurance and medical costs, and do a lot of positive things. Yep.

Speaker 1:

But people will say, oh, let's sue it, let's prevent these companies from creating something that could actually benefit us, because of the initial problems that you're gonna run into.

Speaker 3:

That's a good point.

Speaker 1:

Last topic: the integration challenges.

Speaker 2:

So, yeah, the last point then, integration challenges. Two I want to go through are data quality and preparation. We kind of talked a bit about this before, but right now, if you want to train your model or use it, you know, create indexes where you can pull data from and pass it through to the model for generative answers, it makes a big difference to get that data into a structure that the AI can easily understand and that your indexes can easily pass through. And so I think if you're trying to integrate AI into a company, and you have a whole load of data that you wanna start using via training or via creating indexes around it with RAG, it is important to understand that there are formats to it.

Speaker 2:

You want to get it into a really structured state that works well with AI. It's a normalization process that you have to go through, and I think there's a lot of misconception that you can just throw any data at AI. That's definitely not the case. It needs to be in a format that works well with it.

Speaker 2:

And we've gone through exercises recently around that, and I'm sure you can talk about it more, where we get documents into a certain format. We found markdown worked really well, especially for front-end tools that then display that markdown. It just flows through really well. So, yeah, it's a good consideration to think about. You do have to actually pay attention to that.
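To make that data-preparation point concrete, here's a minimal sketch, not from the episode itself: the function name, chunk size, and sample document are illustrative assumptions. It shows one way to split markdown content into heading-scoped chunks that a RAG index could ingest.

```python
import re

def chunk_markdown(markdown_text: str, max_chars: int = 1500) -> list[dict]:
    """Split a markdown document into heading-scoped chunks for a RAG index."""
    # Split just before each level 1-3 heading so every chunk stays on one topic.
    sections = re.split(r"(?m)^(?=#{1,3} )", markdown_text)
    chunks = []
    for section in sections:
        section = section.strip()
        if not section:
            continue
        title = section.splitlines()[0].lstrip("#").strip()
        # Keep chunks small enough to embed and pass as context without blowing the token budget.
        for start in range(0, len(section), max_chars):
            chunks.append({"title": title, "text": section[start:start + max_chars]})
    return chunks

if __name__ == "__main__":
    doc = "# Returns policy\nItems can be returned within 30 days.\n\n## Exceptions\nFinal-sale items cannot be returned."
    for chunk in chunk_markdown(doc):
        print(chunk["title"], "->", len(chunk["text"]), "chars")
```

The point is the normalization step the hosts describe: one consistent, structured format going in, rather than throwing raw documents at the model.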

Speaker 2:

I don't know if you guys want to add anything to that one.

Speaker 3:

Well, yeah. I don't know the other point you had, but I was going to chat a bit about the change management. But, Joe, maybe you're getting to that one.

Speaker 2:

Go ahead. Yeah. I think it's pretty straightforward from my side at least, but go ahead if you have good insights.

Speaker 3:

Yeah. I think what we found is there's really a fundamental shift in how you need to think about AI technology and its potential. When we've looked at people who haven't been following AI, they're not aware of the technologies. You're potentially integrating a tool that is really powerful and can do a lot more than people realize is possible if they haven't worked with AI. So part of the integration challenge is really understanding who inside your company needs to work with this technology and needs to be guided on how it functions, to be able to unlock that potential.

Speaker 3:

So if you have subject matter experts, or anyone in specific departments, do they know what it's capable of, and have you made them aware of what it can do, so that you can unlock its full potential? Because often someone may associate it with an old experience with a chatbot that was very static. But making people aware, showing them the potential of it in an organization, will obviously unlock the maximum of what AI can really offer. And I think that's only done through proper change management and education on AI.

Speaker 2:

Especially since AI is such a new way of interacting with a system or with data, it's something that no one's really used to. You can still see it even now when someone tries to chat with something like ChatGPT. If you've used it a lot, you start making your questions pretty specific or brief; you know how to prompt it, which keywords to use to get the best out of it. Whereas if you're a new user, you may not even really know how to approach the question. You have a large question that you want answered, and I've seen a few people write out paragraphs of text explaining in detail exactly what they wanted to do, because they think they have to explain it to that level, not realizing that it's really good at inferring your overall objective from a very short description of your question.

Speaker 2:

And so I think training in how to effectively use AI is something that corporations, companies, individuals, all of us, really need to think about more: how we can get that across. Understanding prompting is a huge efficiency boost. You can give two people AI, and one person is way more efficient with it than the other, simply because they understand how to interact with it better, how to ask it, how to prompt it. Absolutely.

Speaker 2:

Yeah. And how to ask follow-up questions and guide it in a certain direction. I'm sure you've noticed yourself, as you're asking a question, that the response isn't coming back in the direction you want to go. When you have enough experience, you can start steering the conversation.

Speaker 2:

No, I can see you drifting off to this other technology that maybe I'm not interested in. You kind of nudge it back on course, and you can get back on course and get your answer. But a user who's not so aware of that process may see that first answer and think, no, it doesn't know what I'm asking, and sort of give up, not realizing that you have to work with it and sort of massage it sometimes to get there.
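As an illustration of that "prompting is an efficiency boost" idea, here's a minimal sketch of a brief prompt plus a corrective follow-up, rather than a paragraph of background. It's our own example, not from the episode; it assumes the official OpenAI Python SDK, an API key in the environment, and an illustrative model name.

```python
from openai import OpenAI  # any chat-completion API works the same way

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A short, specific prompt usually beats a long explanation of everything you want.
messages = [
    {"role": "user", "content": "Summarize our returns policy for a customer email: 3 sentences, friendly tone."},
]
reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)  # model name is illustrative
print(reply.choices[0].message.content)

# Steering: if the answer drifts, nudge it back with a follow-up instead of giving up.
messages += [
    {"role": "assistant", "content": reply.choices[0].message.content},
    {"role": "user", "content": "Closer, but skip the shipping details and mention the 30-day window."},
]
reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(reply.choices[0].message.content)
```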

Speaker 1:

Yeah. It's interesting with that change management, because in our organization we've done digital change management, or digital transformation is another term for it, where we've updated software applications within organizations and integrated better processes to create and manage those software products. And we've seen resistance to that change from the people within the organization, just because you're changing the way they do things, the way they approach things. They have bad habits they've built up for years.

Speaker 1:

So now with AI, I'm just curious, and I've probably asked you this question before: when you're integrating with one of our clients, have you seen any type of hesitation, like, oh my god, this is going to replace me, that kind of viewpoint? Have you seen or heard that at all?

Speaker 3:

No. In my opinion, it's been in some ways the inverse of that, because it's people who haven't been exposed to the technology. They're not aware of the potential, and they're still thinking, well, can this really do what you're saying? Is this possible?

Speaker 3:

In some ways, I think until you show someone what it can do, they may have some preconceived ideas. So I wouldn't say we've seen that directly. But as we ramp up solutions, maybe that will change as well, when people start seeing the results of some of the new projects. Interesting one, though.

Speaker 1:

And just to touch on a couple of other points about this: maintenance and scalability. From my point of view, obviously, when we integrate software, you have to keep up with it. As a very general example, you put WordPress onto your website and you don't upgrade the versions, your website starts to break. That template that you bought doesn't work anymore.

Speaker 1:

Things start to go awry. So thinking of people who integrate AI into their company: do they have to keep paying for these software updates that are happening? They have to have a good maintenance plan that says you've got to upgrade your systems, hardware, and software. I'm curious about that scalability, if you have any insight, because if they don't keep up with certain things, or keep that access to OpenAI or Copilot or whatever they're working with, or their AI agents aren't being upgraded, does that start to break things, because now you have dependencies on things that work with different versions?

Speaker 2:

Yeah. At least from my side, Bryn, I'm sure you'll add some points, but AI is moving so fast at the moment. I think it would be a mistake for anyone to decide, we're happy with how things are working, let's not change anything, even if they weren't forced into change by general updates, just because it's moving so fast: improving week on week, month on month, with better systems and cheaper models coming out. So I think it's really important to be super flexible around that, and that comes into maintenance, to be able to say, okay.

Speaker 2:

Great, this new model's come out that's improved things a lot, let's move over to that. I think any implementation at the moment needs to be flexible, without question, at least from my perception. Not too sure, Bryn, if you have thoughts on

Speaker 3:

that one. A hundred percent the same point. Integrate with flexibility. Don't put your money on one technology at the moment or go all out with one. Make sure you're flexible.

Speaker 3:

Make sure you can pivot, because tomorrow there could be a different model that offers so much more, and you want to be able to switch over to that. So

Speaker 2:

Yeah. And I agree with that point. What that really means when we're saying be flexible: create your tooling in a way that allows you to pivot, so you're not reliant on, let's say, third parties to do that pivoting for you. You're not buying into a boxed software package and entirely reliant on what they do to stay competitive or to get the latest updates. I'd almost recommend having a very flexible system that's broken up into multiple services that you can individually swap out as the technology changes and improves. If a new model comes out, you can swap that out.

Speaker 2:

If a better chatbot interface comes out, you can swap that out separately, and you can move the pieces around as the industry progresses and as our interactions with AI improve. Right now a lot of AI is chatbot based, but as we're seeing, speech is becoming huge. Within a year or two it wouldn't surprise me if most interfaces are speech based, and if you've bought into a software package that's completely text based, you can't change that, while all your competitors are moving towards speech and everyone's loving it because you can just talk to your computer without typing up a paragraph. So you can see why I'd say: make sure you're choosing systems, technologies, suppliers, and vendors who can ensure you can pivot to any of that new tech as it comes out, really easily.
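To show what "create your tooling so you can pivot" might look like in practice, here's a minimal sketch under our own assumptions (the class and function names are illustrative, and the episode doesn't prescribe this design): a thin interface sits between the application and any one model vendor, so a new model or provider becomes a one-class swap rather than a rewrite.

```python
from typing import Protocol

class ChatModel(Protocol):
    """Thin seam between your product and whichever model vendor you're using today."""
    def generate(self, prompt: str) -> str: ...

class OpenAIChat:
    """One concrete provider behind the seam (assumes the OpenAI SDK; model name is illustrative)."""
    def __init__(self, model: str = "gpt-4o-mini") -> None:
        from openai import OpenAI
        self._client = OpenAI()
        self._model = model

    def generate(self, prompt: str) -> str:
        resp = self._client.chat.completions.create(
            model=self._model, messages=[{"role": "user", "content": prompt}]
        )
        return resp.choices[0].message.content

class CannedChat:
    """Stand-in provider (tests, demos, or tomorrow's cheaper model) behind the same interface."""
    def generate(self, prompt: str) -> str:
        return f"[stub answer for: {prompt}]"

def answer_customer(model: ChatModel, question: str) -> str:
    # Application code depends only on the ChatModel seam, so swapping vendors is a config change.
    return model.generate(question)

if __name__ == "__main__":
    print(answer_customer(CannedChat(), "What's your returns policy?"))
```

The same idea applies to the interface layer the hosts mention: keep the chat (or speech) front end, the index, and the model behind separate seams so each can be replaced on its own schedule.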

Speaker 1:

Great. I think that's our 5 gotchas for today, and this is a good chance to wrap it up. I want to thank everybody for tuning in again to another AI podcast episode of Make an IIIMPACT, the User Inexperience Podcast. Remember to like and subscribe to our channel, and we look forward to hearing from you in the comments.

Speaker 1:

Take care.

Speaker 3:

Thanks. Thanks.

Speaker 2:

See you.