Make an IIIMPACT - The User Inexperience Podcast

Summary

In this first episode, the team from IIIMPACT discusses the integration of AI into software applications. They address common misconceptions and myths surrounding AI, such as data concerns and the perceived complexity and cost of implementation, and emphasize that AI is not a silver bullet: success depends on customization for the specific use case, an understanding of the different facets of AI, and familiarity with the tools available for integration. They then work through a series of myths and facts about AI and NLP, covering the expertise needed for AI implementation, the accuracy of NLP, integrating AI across different applications, the trustworthiness of AI, the ROI of implementation, potential job displacement, and the diversity of NLP tools.
Keywords

AI integration, misconceptions, data concerns, customization, complexity, cost, AI, NLP, expertise, accuracy, integration, trust, ROI, job displacement, NLP tools

Takeaways

  • AI is not a silver bullet and requires careful consideration and customization for each use case
  • Data concerns can be addressed through secure setups and options that allow for data to remain within the organization
  • The complexity and cost of AI integration have been reduced with the availability of tools and cloud infrastructure
  • Understanding the different facets of AI and the specific use case is crucial for successful implementation
  • AI implementation requires expertise and understanding of the specific problem and available tools.
  • The accuracy of NLP depends on the specific use case and the quality of the data and prompting.
  • AI can be integrated into various applications and does not necessarily require separate instances for each.
  • Trust in AI technology should be based on its purpose and limitations, and human verification is still important.
  • AI implementation can provide clear ROI by improving efficiency and streamlining processes.
  • While AI may impact certain jobs, it also creates new career opportunities and promotes efficiency.
  • NLP tools are diverse and vary in their capabilities and applications.
Titles
  • Addressing Data Concerns in AI Integration
  • The Cost of AI Integration: Debunking Excessive Claims
  • The Impact of AI on Jobs
  • The Diversity of NLP Tools
Sound Bites
  • "AI is not a silver bullet."
  • "The tools have become so much better, it's so much easier."
  • "The cost of setting up is not excessive if you're using the right tools."
  • "I'm a prompt engineer, give me 500,000 a year or some crazy thing"
  • "Not hiring the wrong person who thinks they're an AI expert on their resume, but they really aren't"
  • "If your prompting is correct, if it's watertight, you can lock down that accuracy to a high degree"
Chapters

00:00
Introduction to Impact and the Topic of AI Integration
06:32
Misconceptions and Realities of AI Integration
34:45
Expertise and Accuracy in AI Implementation
48:43
Trust and ROI in AI Implementation
59:12
The Impact of AI on Jobs

What is Make an IIIMPACT - The User Inexperience Podcast?

IIIMPACT is a Product UX Design and Development Strategy Consulting Agency.

We emphasize strategic planning, intuitive UX design, and better collaboration between business, design, and development. By integrating best practices with our clients, we not only speed up market entry but also enhance the overall quality of software products. We help our clients launch better products, faster.

We explore topics about product, strategy, design, and development. Hear stories and learnings on how our experienced team has helped launch hundreds of software products in almost every industry vertical.

Speaker 1:

Hello, everybody. Welcome to Impact's very first podcast, episode 1. I wanna introduce really quick our purpose, what we do, and the topic, of course, for today. To give a brief intro on Impact, we are a product strategy, UX design, and development agency. We've been around for about 20 years and are currently headquartered in Austin, Texas.

Speaker 1:

We have a team of about 25 people at the moment. We help companies that are looking to upgrade and really improve the software for their applications, from the user experience to product strategy to actual execution. And we've worked across all different industries. You name it, we've probably been in it. And I wanna introduce two of my leads, Brinley Evans.

Speaker 1:

You've been with us for about, what, 20 years. And it's been

Speaker 2:

a long road. It's been a good road. Like, wow. We've seen some things, and I think that's why we kicked this off. I mean, there's stories to tell.

Speaker 2:

There's knowledge to share. So I'm keen to be a part of it. And, yeah, I was gonna take the lead on this one, but, looking at what I'm doing now with Impact, you know, building up all that experience over the last 20 years, I'm now looking at more of a UX analyst role with our clients: really helping them to understand opportunities and realize those opportunities right through from ideation through design to development. So it's exciting times, and I think what we're gonna be unpacking today is particularly exciting.

Speaker 2:

That's enough about me.

Speaker 1:

Well, I wanna say one thing, though. Remember, our very first major project was with a small little company called Marvel Mystery Oil, which is owned by Turtle Wax. What we had to do was build a virtual garage in Flash.

Speaker 2:

I do. That was quite something.

Speaker 1:

It was a bit of

Speaker 2:

art deco architecture thrown in.

Speaker 3:

Oh, yeah. It's great too.

Speaker 2:

Modeling. Wow. Okay. And the user experience on top of that. Oh, yeah.

Speaker 2:

Awesome.

Speaker 1:

That was great.

Speaker 2:

I can remember those Flash Rolodexes that we did. That was another thing: bringing something retro into Flash, which was cutting edge at that time, and which is now long buried. That's

Speaker 3:

so fine. Yeah. To say Flash was cutting edge. Yeah. That's a little flash from the past.

Speaker 2:

Point. Yeah.

Speaker 3:

Yeah. Yeah.

Speaker 1:

Cool. And I'll introduce our second person here, Joe Craft. You've been with us for, I think, about, what, four and a half years? What's going on?

Speaker 3:

Yeah. Yeah. Pretty much. You are you are

Speaker 1:

our solutions architect and .NET C# engineer, and obviously also with one of our longest-running clients, you're assimilating the Borg, I mean, AI into

Speaker 3:

Try to tame it a bit. Yeah. Yeah. It's interesting. That's that's

Speaker 2:

your modesty, Joe. You know, Joe's already at a 110 percent on top of it.

Speaker 3:

Right. Yeah. Yeah. For sure. Great.

Speaker 1:

So, good to hear. With that, I just wanna welcome everybody to the show. And then, you know, one of the hot topics for discussion today is obviously AI, and whether businesses are considering integrating AI into their software applications. You know, there's some hesitations and frictions that are involved.

Speaker 1:

You know, this is a hot topic, and so it's something that we wanna kick off our first podcast with. So I don't know what that

Speaker 2:

And I think if we kinda think back, I mean, just over a year ago, ChatGPT came out and I think just blew everyone's mind. I mean, I can remember the first time I thought, well, let's throw some things at it. Let's ask it to write a poem, and just being really blown away and saying this is incredibly game-changing technology. And I think we all had our own kind of experiences that way, you know, you just realize the power that it offers and the potential for business. And I think that's what's kind of excited

Speaker 2:

us all, and got us, you know, into implementing it and, you know, seeing what it can do for our clients. So I guess it's with that that we sort of look at, well, what are the myths that, you know, our listeners could be thinking about and, you know, could be sitting with?

Speaker 3:

What are

Speaker 2:

those, what's false, what's true, that sort of thing?

Speaker 3:

Exactly. Also, just the general perception of AI has changed over the years. It's changed so much from five years ago to now, in terms of what's actually happening with it as a term. It's morphed into something completely different. What used to be a very specialized, academic, you know, very specialized science has now become mainstream technology.

Speaker 3:

And now every business is trying to jump on almost like a bandwagon around it and blanketing everything as AI. So, yeah, it'd be interesting to go through that timeline to see how we've all seen it evolve, and what it means now. And, yeah, just the total misconceptions around what it can do. And another question, which is a big one because it's actually coming up quite a bit, but it's a whole topic in itself: would we even be in an AI bubble right now, in terms of how people are using AI?

Speaker 1:

I think it

Speaker 3:

has very specific good use cases, but it's just being used for everything right now. So it's getting labeled with a term that's being called a bubble, and, you know, will it burst? But that's kind of more of a tech industry perception or concern around it. But, yeah, unpacking what that actually means is important.

Speaker 2:

Yeah. I think,

Speaker 1:

you know, maybe just to kick it off with our topics, I think, you know, probably one of the misconceptions is that it's a silver bullet for everybody. You hear everybody just talk about, oh, let's just throw AI into it. Let's just do this. Obviously, with both of you helping one of our clients actually involved with this, it's not a silver bullet. It's not something you can just start entering in.

Speaker 1:

And, you know, obviously, if you want a simple chatbot, maybe, but that's not what we're seeing. I don't know if you guys can talk a little bit more about that.

Speaker 2:

It's interesting because I think AI is a broad grouping, you know, and there's so many facets to it. And, you know, everyone talks about AI almost as a single entity, a single product, or a single solution, when it's multifaceted, multidisciplined. You know what? When someone references AI, what are they talking about? Are they talking about an NLP, which is a natural language processor, or, you know, are they referring to neural networks?

Speaker 2:

So, you know, what is it? And I think there's a sort of blanket AI term that has just, you know, come across, and, you know, is that a silver bullet? Well, it depends what you need to do with it. Is it an amazing tool to use in a number of different scenarios, if you can pick the right tech stack and make the right selections? Absolutely.

Speaker 2:

I don't know. What do you guys think?

Speaker 3:

Yeah. I mean, you can use an analogy that AI has become kind of like a workshop that just has loads of tools inside it. Tools for every sort of purpose. Some work really well for very specific scenarios.

Speaker 3:

So say you want to, like, hammer nails into a wall, and you go up to a guy who says, alright, I'm the AI workshop, and it's like, cool. Can you hammer this nail into a wall? It's like, well, you know, what exactly are your nails? Like, what tool can we use for that? Okay.

Speaker 3:

We've got a hammer, and that's the perfect tool for this. But if you just go without an understanding of all the tools available to you, and you're just looking at the AI workshop, it can be very disorientating and very confusing to figure out, okay, which is the right tool for it? Because they all have very specific niche uses. Some are really good at certain things. And you have to still be able to figure that out and navigate that landscape of what's available.

Speaker 3:

And I think, for someone just looking at a product that says it's using AI, again, that's like saying a mechanic has a workshop. Okay? But, like, you know, what are they actually doing in that workshop? What are they building? Like, what tools do they use to, you know, actually build their products?

Speaker 3:

And that is always vague. It's always ambiguous. You can go to any website that says they use AI. And if you try and look into it and go, okay, well, what AI are they actually using? Like, you know, they say that they can do all these, maybe, predictions and market analysis.

Speaker 3:

But that's what they'll say: we're using AI. And it's sort of pulled into this buzzword. And as a consumer of that, you're just totally unaware of, okay, well, is this even the right tool for my purposes? Will it actually fit my use case? And I think the big challenge is figuring out, okay, well, how can we make it more clear what those tools do, especially the tools that we're getting really good experience in? How do we navigate outside of that bubble of a workshop of AI and say, okay, we're actually really good at these tools and we have good experience with them.

Speaker 3:

And I think any sort of company dealing with AI needs to figure out how to get that out there: break out of "we use AI" and actually say, no, we use generative answers for, like, a very specific purpose, for, you know, understanding documentation or something like that. And that's a very niche field, and it works really well for that. And we're using really good tools for that too. And how do we get that message out?

Speaker 3:

So are you

Speaker 1:

using AI right now for your eye tracking there, Joe?

Speaker 3:

It doesn't work. That little niche demo can look so impressive. It looks like magic. But it just works for that small example that they've shown. And as soon as you try and break out of that and start getting it to do more things, or try and manipulate it in a different way, it just falls apart really quickly, like with that eye tracking software I was using.

Speaker 3:

Mhmm. Sounds cool. And if the lighting is perfect in the room, and if I'm positioned really well, and I'm looking at the camera perfectly all the time, great. But 90% of the time my eyes are, like, switching around and doing different things. And, yeah, that's the unfortunate thing with the industry too: there's so many kind of gimmicky AI solutions thrown out there, and you'll look at a YouTube video and you'll be like, wow, this is amazing.

Speaker 3:

And then once people try it, it doesn't work well. And, again, their perception of AI then takes a downward turn: their perception of, you know, what AI can do for them becomes, I've tried AI, it didn't really work that well for me. It's like, oh, well, that's totally not what we're doing, or what this can do, or what something else can do.

Speaker 3:

That's just, like, a really niche area of it.

Speaker 2:

Or the question would be, how long ago did you look? Because I know, Joe, what we've seen in the last year is a huge forward push in terms of capabilities and feature releases from a lot of the major software companies. So it's, yep, you could check it on Monday, but by Friday there are a whole lot of new features and capabilities. It's really moving quickly.

Speaker 2:

So when you think of that bubble, well, you know, it almost feels like we're talking about greenwashing, where, you know, 10, 15 years ago, everyone was like, whoa, go green and do this. Everyone was trying it on everything. But what I feel the difference is, is the speed at which it's moving. Even look at the AI video generators; they are making leaps

Speaker 3:

and bounds.

Speaker 1:

I mean,

Speaker 2:

we're not far away from, you know, generating full feature-film kind of movies based on your keywords. You know, it wouldn't surprise me if in 10 years we were watching Netflix and, you know, requesting the type of script that we want and the type of movie.

Speaker 3:

I'll add in there as a caveat: again, that's like a demo video that we've seen that, you know, OpenAI has released. Like, have you tried it yourself yet? Have you actually used it or experimented with it yet? These demos always look so impressive, but, like, I don't know.

Speaker 3:

Every time I try some of these solutions out, they're quite limited. And again, so many of them are really good, though. That's the frustrating thing. That's the point I'm trying to get across. There are some applications that are so good, really good at some specific things.

Speaker 3:

But other things, people try and make work. They come up with a cool idea, and they try and get it there. And it works, again, in that niche situation, which gets a lot of interest, but when it's actually turned into a product and more people get eyes on it, it kind of falls apart.

Speaker 1:

I think there's Yeah.

Speaker 3:

It's more of a perception at the moment. Yeah.

Speaker 1:

Yeah. I think, you know, with some of the points that you touched on, it's almost like the Internet bubble boom back then, where everybody was having a website. If you had a website, you had millions of dollars coming at you because you had some kind of e-commerce website. So it's almost like that kind of pendulum has swung this way for AI, and you're gonna see a lot of vaporware, a lot of things that may look like AI or have some type of, you know, algorithm, and they throw AI on it: oh, yeah, it's AI.

Speaker 1:

And so, you know, you've seen this in recent news with Amazon, their stores that are fully automated. Turns out that's not the case. They have a team, I think offshore in India or somewhere, where they're typing away. We had a client actually look at one of these supposed AI technologies, and it turns out it was a large team of people in the Philippines doing the AI for that, which was interesting. And we did the evaluation.

Speaker 1:

We thought it was vaporware, but they didn't listen to us. So, you know, a lot of companies are just jumping on that bandwagon, and maybe it's part of that lack of expertise: they just don't have the in-house people to understand the difference between what is real AI and

Speaker 3:

what isn't. Well, yeah, I think even with, like, the Amazon thing, it's kind of the same point: someone originally developed it, it looked really good, and it started working really well for small-scale demos. And normally, when you build something small, a POC or a prototype, you can kind of see, okay, it's working for the small use case; if we scale it out, will it cover the other scenarios? And if it doesn't, we just build in the logic to make it work for those scenarios.

Speaker 3:

The problem with AI is that because it's kind of a large language model, a lot of it is just kind of magic. You don't really know what's going on inside that model to get that answer to you. It might work for a few smaller cases. But when you try and scale it out and use it for more purposes, or different environments and different scenarios, it starts showing cracks, and you can't repair those cracks really easily, because, again, it's just kind of a black box. It can work for some small niche cases, but not beyond that.

Speaker 3:

So for something like Amazon, what they probably did is, yeah, it worked in their niche use cases, but they realized, well, we've got to fill in these cracks. Okay, let's use staff for now to fill in those cracks while we build this out and try to figure out how we can cover it with actual AI. Now, obviously, eventually they reached a point where it's just sort of limping along, maintaining it to keep it going, but that last, like, 30% that they needed AI to cover, they just couldn't get there. So, yeah, whenever you're looking at a solution, just think of it in those terms: if the technology right now can do something, don't assume you can expand that out to it doing everything. It can probably only do that small thing.

Speaker 3:

And sometimes that small thing is great. That's exactly what you need. And it serves a great purpose, but just keep it to that. You gotta keep yourself kind of contained to what it can do. And again, view it as a single tool doing a single job for that specific use case.

Speaker 3:

And don't think you can just jump over into other use cases very easily, because, yeah, that's where you can go wrong quite quickly.

Speaker 2:

As far as I think

Speaker 1:

you know oh, go ahead, Brinley.

Speaker 2:

No. I was just saying, we're still sitting on really powerful tools. Like, look at NLPs, just to focus on those. The way they're advancing their capabilities, their understanding; you know, even if you look at one of the models like ChatGPT, you know, 3, 3.5, 4, 4.5, sort of Turbo, the capabilities we're seeing are improving all the time. It's really a powerful tool. And I think that's where, you know, it depends on who you speak to; everyone's got a different view on it, and a different bias towards it or against it.

Speaker 2:

They never wanna look at it, or they want it completely integrated into everything. I think that's where looking at these assumptions is fun. And, you know, I'm keen to dive into that and see, well, what are the ten sort of assumptions we've mapped out? And are those myths, or can we verify them as some sort of truth?

Speaker 2:

So I don't know, Makoto, if you wanna jump in or if you have anything else to add. I'm just cutting to the chase here, but I'm keen to jump into those and do some

Speaker 3:

Fantastic.

Speaker 2:

Podcasting kinda myth busting.

Speaker 1:

Alright. Let's do it.

Speaker 3:

It's awesome.

Speaker 2:

The first one we had.

Speaker 1:

Yeah. It's: I have data concerns. I don't want my data used by AI companies to train their models. I think that

Speaker 2:

that's an interesting one, because, I mean, Joe, we've seen a range of different people that we've interacted with have different perceptions again. And that's from all the news that came out initially: well, you know, I don't want my data to be used. You look at even a lot of the lawsuits that are coming out now in terms of training the models, where these big tech companies are, you know, pretty much scraping everything that's out there to build their model. So, you know, you have concerns: well, I don't wanna feed data to it. I don't want data leaving my secure environment and, you know, going away. And I know that, from what we've locked down, there's been such a movement towards containing everything, and, you know, companies being completely compliant with that from a security perspective, and also building it into your environment, like we've seen with Microsoft, for instance, with Azure OpenAI: having it contained in your own development environment.

Speaker 2:

So your data is not leaving anywhere. Yeah. I know you have a lot to add. It's heartbreaking, Joe. But I don't know.

Speaker 2:

I'm too

Speaker 3:

excited, really. I mean, it's a technology like any other, right? You've got your data, and you're giving it to a third party. Right?

Speaker 3:

I don't think the landscape's really changed compared to giving your data to a company for any other purpose, to do some analysis or data analysis or anything like that. You're handing over your data, and it's all just about the trust that you have in what they're gonna do with that data, and the agreements that you have. Right now with things like ChatGPT, you know, if you're putting your company data into that, the agreement is with ChatGPT: they are going to use that to train their own models. They're going to ingest that.

Speaker 3:

I mean, you just need to be aware of where your data is going. But, you know, once you're aware of that, you can look at alternatives. If you don't want your data going out, you can get into something like Azure and deploy your own model within your own Azure space and just use that, and it doesn't go anywhere. So if the question comes down to, you know, is my data going to be used by AI companies for training, again, the answer is: you know, has your data been used anywhere else before?

Speaker 3:

The same agreements stand. I think you've just gotta be aware of the options out there and what you want. Yeah. You know, how sensitive your data is. And then there's obviously costs around whether you wanna build something yourself or use a third party that's cheaper.

Speaker 3:

What makes sense for you? You've just gotta work through that. Yeah. Can I ask if you have any ideas on that one either?

Speaker 1:

I'm not as smart as you guys.

Speaker 2:

So what I'm saying there is, we're not entirely busting that one, as I think we can kinda say, look, there are setups that you can use that will be optimized to any level of security you would need.

Speaker 3:

Correct.

Speaker 2:

Or there are ones that, you know, can make your data external and use it for training. So I think at least we can answer that and say, for anyone that is sitting on the fence and thinking, well, you know, I'll never use one of those: it's not the case. You can lock it down; it can be as tight, security-wise, as you want it to be.
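To make the "your data doesn't leave your environment" option concrete, here is a minimal sketch of the difference between calling a private Azure OpenAI deployment and the shared public OpenAI endpoint. The resource and deployment names (`contoso-ai`, `gpt-4-docs`) are hypothetical placeholders, and this only builds the request URL rather than making a real call:

```python
# Minimal sketch, assuming a hypothetical Azure OpenAI resource and deployment.

def chat_completions_url(resource: str, deployment: str, api_version: str) -> str:
    """Build the REST URL for a chat-completions call against a private
    Azure OpenAI deployment. Requests go to *your* Azure resource, so the
    data stays inside your tenant under your agreement with Microsoft."""
    return (
        f"https://{resource}.openai.azure.com/openai/deployments/"
        f"{deployment}/chat/completions?api-version={api_version}"
    )

# The public endpoint, by contrast, is one shared URL for everyone,
# governed by the provider's own data-usage terms:
PUBLIC_OPENAI_URL = "https://api.openai.com/v1/chat/completions"

url = chat_completions_url("contoso-ai", "gpt-4-docs", "2024-02-01")
```

The point of the sketch is simply that the security posture is a deployment choice: the same style of API call can be routed to an endpoint you control.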

Speaker 1:

Yep. Yeah. I think the only thing about data, and what I've seen, is there's one called Poe. It's from Quora. Everybody has probably seen that website or visited it one time or another.

Speaker 1:

It's kind of like Reddit, but where people answer questions. Basically, they use their AI to take questions and answers from other people and then, you know, post that as their answer. Before, people were getting credit for, you know, answering those questions and being seen as the expert. Now, whether it's Google or somebody else, if they're not referencing who made that comment or that answer, yeah.

Speaker 2:

Yeah.

Speaker 1:

I think it's completely unfair, because you don't know who it's coming from, the source. Yeah. And, you know, not getting credit for it is really concerning for the people that actually are creating the content and taking the effort to do something like Quora, where it is free. So I think

Speaker 3:

Yeah.

Speaker 1:

That that that will be

Speaker 3:

a different side to it. I mean, just to clarify for myself here: I think Brinley and I were jumping onto, I'm a company with data, and I need the AI to answer questions on that data, and I don't wanna send that data out. That's where Brinley and I were going. You're more talking about data that's really out there: you've already published it, you've answered questions before, or you've created art, or you've written a book, or whatever the case is; your material is out there.

Speaker 3:

Now AI is using that to train itself. There's nothing you can do; you didn't come to that agreement. That's a totally separate side of data concerns, you know, less from a business perspective, but just as valid.

Speaker 3:

And it's interesting. This is a bit of a breakaway, but even with stuff like AI art, there's a huge backlash against that, because so much of it was used to train the models without any consent; they sort of scraped a whole bunch of artists' sites and just pulled that in. And there's a really negative perception of that. Yeah, totally different.

Speaker 3:

Big, big topic to get into. Yeah. I was just wondering why.

Speaker 2:

Wow. That's that's controversial for the first, first podcast. Yeah.

Speaker 1:

We can jump

Speaker 2:

to the That is the

Speaker 3:

Yeah. You wanna jump

Speaker 1:

to the second one, or do you wanna conclude on this first

Speaker 2:

topic? No. Okay. I mean, I think, as you've said, probably from a business perspective, it's Mhmm. Yeah.

Speaker 2:

It can be as secure as you need it to be. There are plenty of options out there.

Speaker 1:

So for the second one, is it myth or fact: our business is so complex, the amount of customization needed to set this up would be prohibitive.

Speaker 2:

What are

Speaker 1:

your thoughts

Speaker 2:

on that? It's an interesting one because, I mean, we've seen setups that are particularly diverse in terms of technologies used, software that's been implemented, or different applications. But what's nice, especially with everything in the cloud, is that applications can be linked through a sort of central resource. So if we're talking about a solution that uses an NLP, you can have all your data in a sort of single, you know, center that can be used and integrated across apps. And the integration points can be really simple.

Speaker 2:

So you can have that knowledge being spread across a diverse set of different applications fairly simply. I guess, you know, it depends. But you often think, well, if you've got 20 products, do you need 20 separate instances of this? And are you gonna have to duplicate that effort when, really, they could be connected on a single kind of technological thread.
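The "single central resource" idea above can be sketched in a few lines: many applications share one NLP service instead of each standing up its own instance. This is an illustrative toy, not a real NLP client; the service name is made up, and the keyword lookup stands in for a real model call so the sketch stays runnable:

```python
# Illustrative sketch: one shared NLP service, many consuming applications.
# The deployment name and answer logic are hypothetical stand-ins.

class SharedNLPService:
    """One deployment and one knowledge base, integrated across apps."""

    def __init__(self, deployment: str):
        self.deployment = deployment
        self.documents: dict[str, str] = {}  # shared knowledge base

    def ingest(self, doc_id: str, text: str) -> None:
        """Add a document once; every connected app can then query it."""
        self.documents[doc_id] = text

    def answer(self, question: str) -> str:
        # Placeholder for a real NLP call against self.deployment; a naive
        # keyword match keeps the example self-contained.
        for doc_id, text in self.documents.items():
            if any(word in text.lower() for word in question.lower().split()):
                return f"[{self.deployment}] see {doc_id}"
        return f"[{self.deployment}] no match"

# Twenty products can point at the same instance rather than twenty copies:
service = SharedNLPService("central-nlp")
service.ingest("billing-faq", "Invoices are issued monthly.")
apps = {name: service for name in ("web", "mobile", "admin")}
```

The design point is the single "technological thread": ingest once, and every integration point is just a thin call into the shared service.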

Speaker 3:

Yeah. I'm kind of viewing this question a little bit differently, because, like, first of all, it's such an ambiguous question. This is, like, a very unfair question. Like, our business is complex; the customization needed to set this up would be prohibitive.

Speaker 3:

Well, what are you trying to set up? Right? I mean, AI, again, is such a huge topic. Like, what are you actually trying to do? That would be the first question.

Speaker 3:

It could be something really simple, or it could be something really, really advanced. Like, if you want to get your documentation trained by AI, that's pretty straightforward. If you want to do predictive analysis using AI, which is machine learning, on future, you know, costing trends and market analysis, that's really complex. So it can scale wherever you want. So, really ambiguous question. But what I do want to add there is that the tools that are out there, that are available to integrate AI into this, have become so much better. As Brinley was saying, a year ago we were trying to get something working with some data, and it was really tricky; we had to work around it. Then, like, three months later, there were tools out there that were doing exactly what we wanted to do.

Speaker 3:

And just the way the industry is really moving, especially with AI. It has always been a very, again, scientific kind of niche data scientist industry to be in. But it's kind of expanded out, and this is really what's created this whole AI boom. It's not really just the technologies that are available. ChatGPT is amazing, but, you know, NLP has been around for quite a while. It's the ease of use of actually being able to harness that power, and the tools, the APIs, the infrastructure they're putting in place, or the cloud infrastructure that's available. That's making it so that, even if you have a very complex scenario or complex business, integrating with those external tools has become really simple to set up. You don't need to be an advanced data scientist to do it anymore.

Speaker 3:

You just need to understand what it's capable of. I don't need to understand the inner workings of what this model is doing. I can just pick a model from a service listing, decide which one is the best for my use case, and it creates the endpoints. I just, you know, send through my data and parameters, and it gives me, you know, an output in return that satisfies whatever that small business use case is.
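The flow described here, pick a hosted model, then send your data and parameters to its endpoint, can be sketched roughly as below. Everything in this sketch is illustrative: the model name, the payload shape, and the field names are hypothetical placeholders, not any specific vendor's API.

```python
import json

def build_completion_request(model: str, question: str, context: str) -> dict:
    """Assemble the kind of payload a hosted-model endpoint typically expects:
    which model to run, plus the data and parameters to run it with."""
    return {
        "model": model,                # chosen from the service's model listing
        "input": f"Context:\n{context}\n\nQuestion: {question}",
        "temperature": 0.2,            # low temperature for more repeatable answers
        "max_tokens": 256,             # cap on the length of the returned output
    }

# Build (but don't send) a request for a hypothetical model.
request = build_completion_request(
    model="general-nlp-small",         # hypothetical model name
    question="Summarise our refund policy in two sentences.",
    context="Refunds are accepted within 30 days with proof of purchase.",
)
print(json.dumps(request, indent=2))
```

In a real setup, this payload would be POSTed to the endpoint the service created for you, and the response body would carry the model's output back.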

Speaker 3:

So, yeah, I would say that's pretty much my answer. The tools have become so much better. It's so much easier, and this is continuing every day. Even with Azure, like, they bring out new tools every time I visit that site, Azure AI Studio or OpenAI Studio, with new tools there every day, and it's just getting better and better. Even machine learning problems that were previously really complex, you can now just, sort of, anyone could do it.

Speaker 3:

Mercurio, you know, you've never looked at it. I guarantee if you just spent, like, 2 weeks in it, you'd be able to get it going through the documentation. It's that simple. The tools are available. It's like

Speaker 1:

Maybe maybe

Speaker 3:

to scrape that really well. Maybe one. It's actually really amazing. But, yeah, maybe an hour. But yeah, that's sort of my takeaway from that.

Speaker 2:

I think you summed it up well. Just the fact is, if we're talking a year or 2 ago, yes, it probably would be, you know, prohibitive. But there are so many options now, and the barrier to kind of understanding and accessing all of those has come down.

Speaker 3:

Yeah.

Speaker 2:

And that's I think that's an important point to to really highlight.

Speaker 3:

We can literally ask ChatGPT how to do something complex. Like, you can ask AI how to make integrating AI easier, and you'll actually, yeah, make good ground with that. Yeah.

Speaker 1:

Yeah. I think just from my perspective, I look at it from, if your business is trying to integrate it within your own product and you want to customize whatever you've integrated. I'm looking at it from a user experience perspective. You know, we've been asked to improve on, whether that's entering prompts or how the output looks, because it's very text heavy. But if you want to sell it, and customize it where you're selling it, it's like a B2B2B situation. And so we've helped that output look more user friendly.

Speaker 1:

The inputs that you do, you're not typing away. You're not reading paragraphs and paragraphs of information, but something that is improving the user experience so you can get to the questions quickly. You can understand and then group the output of those questions more effectively. And so that's what we see as far as customization, of what's important, especially when you're integrating it within your own product.

Speaker 3:

Yeah. Definitely. I mean

Speaker 2:

Without going into the UI too much, I see this as probably the most exciting development: the fact that we're going to get closer and closer to a conversational UI for complex business applications. Mhmm. And that is transformative because, you know, you no longer think of business functions as tied to a specific screen. You break down so many of the complexities around using software just by being able to interact with software through something like an NLP that's linked to automation. Incredibly powerful.

Speaker 2:

It's probably probably discussions for another podcast, but

Speaker 3:

Yeah. I want to put

Speaker 2:

that out there because it is it is exciting stuff.

Speaker 1:

Yeah. And I think, you know, we could jump to, I wanna combine kind of points 3 and 4 of our myths and facts, but I think they're kind of related, as far as, you know, the cost of setting it up is excessive, and, obviously, the lack of expertise within the organization. I think they go a little bit hand in hand: what is the cost of integrating? Do I need to hire new people? Do I bring on consultants?

Speaker 1:

Do I bring on Microsoft or Google, whoever is integrating it? Do I bring on their team to do it? Is that cost effective? What are your thoughts on that?

Speaker 2:

Yeah. I think the setup being excessive, you need to look at it in a number of different ways. So, what is it? Again, it comes back to, Joe, what you were saying earlier. What is it you need to do? But, also, what products are you looking at?

Speaker 2:

Because we've seen products that, you know, have integrated AI, and they charge a fortune for what is often possible just using tools directly from, you know, a big software company like Microsoft. So, you know, you can cut down the cost hugely if you just have a small investment in people that understand, you know, what they're doing with that technology. I also think that there's something to be said about how implementing certain technology will affect your ROI, and that's a big one, and it probably overlaps one of the points we have a bit later. But, you know, what are you going to be saving by implementing this? If you've got staff, and it's going to allow them to work X percent more efficiently, what does that look like in terms of your saving?

Speaker 2:

You know, what new business are you gonna get from that? So I think that cost can be offset on a lot of the value adds that implementing something like that offers.

Speaker 1:

Yeah.

Speaker 2:

I would say I agree. At least on the cost of setting up being excessive, I think that's busted, in terms of the, you know, if you're using the right tools, it doesn't have to be. There

Speaker 3:

are a lot

Speaker 2:

of really, you know, it even goes down to what model you're using if you wanna go with an NLP approach. Use a slightly older model, the token cost is very low, and it can still offer Yeah. A massive amount.
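The point about older models and token cost can be made concrete with a back-of-the-envelope calculation. The per-token prices below are made-up illustrative numbers, not any vendor's actual pricing; only the shape of the comparison matters.

```python
# Hypothetical per-1K-token prices: older models are often far cheaper.
PRICE_PER_1K_TOKENS = {
    "older-model": 0.0005,
    "newest-model": 0.01,
}

def monthly_cost(model: str, tokens_per_request: int, requests_per_month: int) -> float:
    """Total monthly spend for a given model at a given usage level."""
    total_tokens = tokens_per_request * requests_per_month
    return total_tokens / 1000 * PRICE_PER_1K_TOKENS[model]

# 10,000 requests a month at ~1,500 tokens each.
old = monthly_cost("older-model", 1500, 10_000)
new = monthly_cost("newest-model", 1500, 10_000)
print(f"older: ${old:.2f}/month, newest: ${new:.2f}/month")
```

If the older model's accuracy is good enough for the use case, the difference at this (invented) price ratio is a factor of 20 on the monthly bill.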

Speaker 3:

Yeah. Yeah. From the lack of expertise

Speaker 1:

Yeah. I was just gonna say, with expertise, I mean, you know firsthand kind of the background expertise. Not everybody's gonna say, oh, well, I'm a prompt engineer, give me 500,000 a year or some crazy thing that I've heard, you know. But, I mean, what is it, as far as the expertise, that you see that, you know, you need to have some kind of success with this? Or, you know, not hiring the wrong person who thinks they're an AI expert on their resume, but they really aren't?

Speaker 3:

Yeah. Or just

Speaker 1:

like do you think

Speaker 2:

do you think?

Speaker 3:

Yeah. Yeah. I was just gonna say, you know, as with anything, like, you need to trust the person that's giving you the advice. Right? And especially in the AI space, it's a space where there are thousands of solutions out there.

Speaker 3:

A lot of them are just either vaporware or not really gonna solve the use case, and you need someone to really navigate through that, understand what you're trying to accomplish, and then get the best tool for that. You do need that experience, I think, especially now, because there is so much out there. And the cost of setting it up, you know, again, is obviously variable. But as I was saying earlier, with the tools available, I think it is quite quick to get some POCs going around this and to do a very quick implementation. AI isn't something that needs to, you know, integrate into every facet of your organization to work.

Speaker 3:

It can find very niche areas where you can just sort of plug it in, or put it on top of some data, and just see what it can do. It doesn't need to be a huge exercise. It can be, you know, a period of a few months where you just sort of understand the landscape, understand what's there, pull out some prototypes, again, just to target those specific scenarios and make sure that they can work with them. And then you can always expand it out from there. It doesn't have to be, okay,

Speaker 3:

We're going to integrate AI. Let's set up a year-long project, and at the end, we'll have integrated AI. It doesn't need to go down that road at all. It can be very much a step by step, stage by stage process: understanding things, picking the best tooling, getting our input into what that would be, the custom build, the infrastructure that you would need. And again, because there are so many great cloud services out there for you, you don't need to, like, buy all these different sort of software packages and platforms.

Speaker 3:

You can just use pay as you go services and just sort of try out these different scenarios, try out different POCs, see what works for the problem, and then go from there. It can be really, really sort of easy and effective to get these up and running. It doesn't need to be some huge undertaking.

Speaker 2:

So what are we saying in terms of a company thinking, well, they're lacking the expertise to actually pull something off? And I think, from what you were saying, it, I suppose it depends. If you've got someone that, you know, is fairly knowledgeable with solutions, potentially. But there are also teams that you could engage with, you know, to provide sort of more solid advice around, you know, what you

Speaker 3:

should proceed with. That's exactly it, too. Even if you have someone who understands AI quite well, understands certain technologies really well, they could be completely correct in what they're saying and the advice that they're giving you. But the landscape is so huge now, you almost feel like you need to actually get a few opinions on which are the best technologies to use and what's out there.

Speaker 3:

It's evolving so quickly. A single person cannot understand the entire landscape. I think we have a very good understanding of our niche areas, where we've used AI and what's available there. And I think, depending on what you're trying to accomplish, if someone is knowledgeable in another aspect of AI, it doesn't necessarily mean it's even going to slightly translate over to what you're trying to accomplish with a specific problem. So you reach out to people who have experience in what that is, and then, yeah, discuss with them, get that perspective too.

Speaker 1:

You ready to jump to the next one?

Speaker 2:

Why not? Alright. Alright.

Speaker 1:

I've heard NLPs may not be that accurate, and we can't risk having inaccurate answers. We've heard a little bit about this. It could be, whether it's a law firm, you know, they're using AI for research and getting the wrong answers. I mean, if you don't have the experts looking at the output, you know, who knows what kind of, what do they call it, when the AI, you know, basically is making things up.

Speaker 1:

Hallucinations. That's it.

Speaker 2:

And that's it. I mean, that's where it's been so interesting, especially with the work we've done, kind of working closely with different NLPs and understanding, also seeing the progressions in terms of accuracy, how those models have been tweaked. But also, this is going to the prompting, and, you know, I'm a firm believer: even if you're working with an older, cheaper model, you know, to match your cost, if your prompting is correct, if it's watertight, you can lock down that accuracy to a high degree. And, you know, it depends really just how interpretive you want it.

Speaker 2:

So I think that's a point to kind of
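The "watertight prompting" idea can be sketched as a system prompt that restricts the model to supplied reference text and gives it an explicit way to refuse. The wording and message format below are illustrative, not a guaranteed recipe, and the refusal phrase is an invented example.

```python
# Illustrative system prompt that constrains the model to provided material.
SYSTEM_PROMPT = (
    "You are a support assistant. Answer ONLY using the reference text provided. "
    "If the reference text does not contain the answer, reply exactly: "
    "'I don't know based on the provided documents.' Do not guess."
)

def build_messages(reference_text: str, user_question: str) -> list[dict]:
    """Pair the constraining system prompt with the user's question and
    the reference text the model is allowed to draw on."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user",
         "content": f"Reference text:\n{reference_text}\n\nQuestion: {user_question}"},
    ]

messages = build_messages(
    reference_text="Orders ship within 2 business days.",
    user_question="How fast do you ship?",
)
print(messages[0]["role"], "->", messages[1]["role"])
```

The point is that accuracy is partly a prompting decision: the narrower the model's allowed behavior, the less room it has to hallucinate.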

Speaker 1:

No offense to any AI models that are listening right now. We didn't call you cheap.

Speaker 3:

Yeah. I mean, my take on accuracy is, right now, like, you couldn't 100% go, I'm going to trust AI completely, at least with the NLP side of things. And if I was running a hospital and it was telling me to do something? You know, no matter how well you've tweaked that. And again, as you've been saying, the tools are improving year by year. It's getting better and better, even if you have a 95% accuracy rate, though.

Speaker 3:

That 5%, in business, can mean a lot to you. But I think with the purpose of AI, really the advice is: with that 95%, what are you getting out of it? Yes, you probably still need to check over the answers currently to verify them. Right?

Speaker 3:

I even think it's irresponsible. Even if we've gone through all our training and we're getting, like, a 100% accuracy rate, even if we did that, I still wouldn't want to say, yeah, you can just blindly trust it. I think, at least as we stand today, you need to sort of have someone verify those answers and look over them. But again, that 95% is getting it right. And just the effort of reviewing what it's saying is saving you so much time versus having to go through everything yourself.

Speaker 3:

If you can take that review it's given you, where it's summarized it, you can just quickly look over something that's based on a very complex problem, and it's giving you a very simple answer. It sort of takes the processing load that we as a person would have had to go through, to sort of verify that whole document, and puts it into a really simple, easy to read format that you can very quickly verify. That in itself is, like, a huge time saving. And, you know, as time goes on, eventually you will get to a point where you can start just, you know, going, fine, I can actually just trust this with everything.

Speaker 3:

That's your choice. But still, I just think that the whole accuracy sort of thing shouldn't be overlooked, as to what it's doing to even get you to that point where it's sort of giving you the answer that you can then take and move along with. It can do a lot

Speaker 2:

of that interesting stuff

Speaker 3:

for you.

Speaker 2:

It comes down to how you're using and viewing an NLP. So, you know, really the benefit is being able to understand your intent and being able to look at information and summarize it based on what you're asking. And, you know, there are different scenarios where you could be pulling in fixed data that it has to reference, and that data is solid. It's not saying that you should rely on it to interpret everything and bring that back. I think, in terms of interpretation, that's where you can only go so far with the prompting.

Speaker 2:

But if you have concrete, you know, if you're asking the question, it's interpreting, and it's using automation to pull back specific data coming from a database, something that's verified, I think then, you know, there's a high degree of, yes, this is accurate. But if you're relying on it to, well, this is the formula I want you to calculate, and, you know, you must use whatever you can to calculate that within the NLP, then, yes, it's probably a bit risky. But I think there are ways to lock down that accuracy and actually, you know, use the power of its ability to summarize and sort through information and return a more contextual answer, so everyone sort of benefits by combining technologies.

Speaker 3:

Yeah. But yeah. And if sorry. I'm a go ahead. Go ahead.

Speaker 1:

No. Go ahead.

Speaker 3:

There are also 2 other sides to it. I think you sort of talked about them both in terms of accuracy. Right now, asking AI to do some processing for you, like, try and come up with, you know, an answer to a math problem, a market problem I have. Right? Like, you know, what's my forecast looking like next year?

Speaker 3:

What are my numbers looking like? In terms of that accuracy, again, I would say, you know, AI is not good at that sort of thing. Trying to work through numbers, it can do it, but I wouldn't rely on it. But again, there are tools that you can plug in to ensure that accuracy.

Speaker 3:

So the AI isn't doing it, but it is integrated with an API that can, and it's passing through really clear parameters. That's a very logical flow. It returns the result to the AI, and the AI gives it back to you. It's still doing all the work for you. So it's using very accurate tools to give you that answer, not just trying to magically do it itself.
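The delegation pattern described here, where the model doesn't do the arithmetic itself but hands clear parameters to a deterministic tool, can be sketched as follows. The tool name, the call format, and the forecast function are all invented for illustration; real frameworks expose this as "function calling" or "tools" with their own schemas.

```python
def forecast_revenue(current_revenue: float, growth_rate: float, years: int) -> float:
    """Deterministic, testable calculation the model delegates to,
    instead of trying to do the math inside the NLP."""
    return current_revenue * (1 + growth_rate) ** years

# Pretend the NLP read the user's question and emitted this structured call.
tool_call = {
    "name": "forecast_revenue",
    "arguments": {"current_revenue": 100_000.0, "growth_rate": 0.08, "years": 3},
}

# Your code, not the model, executes the tool and returns the exact result.
TOOLS = {"forecast_revenue": forecast_revenue}
result = TOOLS[tool_call["name"]](**tool_call["arguments"])
print(f"forecast: {result:.2f}")
```

The answer the user sees still comes through the model's conversational reply, but the number itself came from code you can validate and test against.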

Speaker 3:

So, yeah, just to add to that point: you can bring in external tools to make it accurate, that you can validate and test against and be quite confident in. And then the other side of it, which is kind of the UX side: you know, when you get that answer back, how can I quickly verify that answer? You know, looking through citations, is there a way I can quickly see, okay, it's given me this answer, can I quickly check through?

Speaker 3:

And what does the UX look like to be able to validate that, to go through all the documentation that, say, it was trained on, and to sort of pick out those pieces and show me, okay, this is where it actually got it from. Yes, that is accurate. Like, I can see the sources it pulled from, that looks right to me. And, you know, again, that process takes, like, 1 minute, whereas if you were to review the entire document yourself, it would have taken, like, 3 hours.

Speaker 3:

But, yeah, it's just still a hugely valuable tool to be able to do that.
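The citations idea above can be sketched as returning the answer together with the source passages it drew on, so a human can verify in a minute rather than rereading everything. The document store, the document IDs, and the keyword lookup below are toy stand-ins for a real retrieval step, and the answer field is a placeholder for model output.

```python
# Toy document store; in practice this would be your indexed documentation.
DOCUMENTS = {
    "policy.md#refunds": "Refunds are accepted within 30 days with proof of purchase.",
    "policy.md#shipping": "Orders ship within 2 business days.",
}

def answer_with_citations(question: str) -> dict:
    """Return an answer alongside the source passages that support it.
    The 'retrieval' here is a naive keyword match standing in for real search."""
    hits = [
        doc_id
        for doc_id, text in DOCUMENTS.items()
        if any(word in text.lower() for word in question.lower().split())
    ]
    return {
        "answer": "...model-generated summary would go here...",
        "citations": hits,   # shown in the UI so a human can spot-check quickly
    }

response = answer_with_citations("when do refunds expire?")
print(response["citations"])
```

The UX win is exactly what the speakers describe: the reviewer jumps straight to the cited passage instead of verifying the whole document.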

Speaker 1:

So don't be lazy when you integrate it. Don't think it's gonna be the end-all be-all answer, is what we're saying.

Speaker 2:

But I think

Speaker 1:

I think just a final point so we can jump to the next one: you know, with any new technology, the printing press, you know, calculators, Excel, the Internet, computers, how confident are we with the accuracy of what's being output by that technology? You always have to check. And again, you can't be lazy. Yeah, it's going to help you with a lot of things.

Speaker 1:

It's going to help you streamline that process. But you know, the technology was never perfect to start off with. So making sure that

Speaker 2:

That's it.

Speaker 3:

Yeah. Sorry. I was gonna add, there's a key point there. Like, you're starting to use this technology.

Speaker 3:

And as it evolves, because you've integrated this part of the process into your workflow, as the technology improves and that accuracy goes up and up and up, you don't have to suddenly change your business and pivot around. Okay, well, I've been doing this process flow, like, you know, the really old way, it was very manual, you know, relied on people. I heard AI was out, but it wasn't accurate, so I didn't take it on. You're already falling behind the competition that is taking it on; they're way more efficient than you.

Speaker 3:

But the bigger problem is that when it reaches a point where you would say, wow, this is actually amazingly accurate, those companies that went ahead with this can very easily switch over to those new technologies. It's really part of their process. It's been bolted into applications, into their workflows.

Speaker 3:

It's very easy then to improve their accuracy over time. As it improves, you can move along with that. If you're just sort of not wanting to touch it at all, you're just falling further and further behind. And when you do eventually want to integrate it, suddenly you find yourself in a position where, you know, you have to do a huge, mammoth task to get it in, when it could have been a simple process just to start getting it in now, to start moving in that direction.

Speaker 3:

It may not be perfect, but you're moving with where the technology is going, you know, versus just sort of staying behind and finding yourself, like, way behind everyone else, when you could actually have been the one getting through.

Speaker 2:

And that's something I think to add as well, kind of a caveat. What we've seen with a lot of products that go to market is that you have no control over that. And that's a lot of the risk when you're actually making a decision between, right, do you get someone in to build something custom, or do you take an off the shelf product? What we've seen with a lot of the off the shelf is you have no control over the accuracy, over a lot of the settings that would control how your answer comes back, and also whether you can change that model. Whereas if you go to the base tools, you can change to a completely different NLP if you test and find that it suits your needs better as this evolves.

Speaker 2:

I think it's about sort of setting yourself up to be as flexible as possible to kind of

Speaker 1:

stay in

Speaker 2:

line with the best solution, you know, as you progress. Because there is an investment in this, and there is often a long term investment, because, you know, you're building, you know, whether it's content around it or data around it. And it's, you know, being able to shift with, you know, whichever company is actually providing the most accurate and the most capable Yep. Model to use. So what do we say? That one's busted.

Speaker 1:

I think so. Maybe. Yeah. Okay. Let's jump to the next one.

Speaker 1:

I think we answered a little bit on this, but,

Speaker 3:

We did.

Speaker 2:

We did.

Speaker 1:

You know, this is this

Speaker 2:

is more about the Let's do it.

Speaker 1:

Yeah. Let's do it anyway. I mean, well, let's we could go through it quickly. We use many different applications. How could it possibly integrate?

Speaker 2:

That's kind of, maybe we were jumping the gun earlier, where we were looking at sort of, you know, businesses having complex scenarios and, you know, setups. I think, I mean, coming back to those points that we raised, really, you can have a sort of central resource that can still integrate with a number of different applications, and you don't necessarily need to be building instances for each application. Obviously, it depends on the use case. But, you know, it doesn't need to be thought of as something that needs to live in your particular deployment.

Speaker 2:

You know, if you think, well, we've got 20 products we've deployed, now we need to deploy something with each of those? Not necessarily. It can be integrated and linked to a single resource. So I think it's flexible in terms of the infrastructure opportunities that AI solutions offer at the moment, or that are offered for them.

Speaker 3:

It all comes down to data, especially with, you know, again, thinking of specific scenarios of AI, but data really is the key. And as long as your data is in a location, or even if you have multiple sets of data, as long as we can pull it all into a central location to sort of analyze and work with, you know, then you can sort of do whatever you want with AI. It really relies on the data, and then you can integrate it into any of your applications, 1 or 2 of them. It doesn't have to be all of them at all. It can live outside your application completely and be its own thing.

Speaker 3:

It really is just sitting on top of your data. So it's really about your data infrastructure, where your data is sitting. And if your data is all over the place, that's probably not good to begin with. It's probably a good idea to actually take a look at that, because, you know, as we're moving forward, these technologies, outside of, you know, NLPs, it's machine learning, even just general data science. Right?

Speaker 3:

Just doing general reports and everything. Most companies have already moved into sort of central areas where at least there's a place or infrastructure that's available there for you to access all that data. And if you have that, then, yeah, AI can integrate really easily. It doesn't need to be on, you know, an application level. It can be a little lower, closer to your data.

Speaker 1:

Cool. So with the next one, and this is more around, can we trust this technology? Will it take over the world? Trust, I think, in this terminology is not whether it's accurate. It's actually, can we trust it to not do something, I guess.

Speaker 2:

Oh, you're going to integrate it, and it's suddenly going to rewrite your website

Speaker 1:

and Yeah. Exactly.

Speaker 2:

Take over, place orders on you. I think this is a great sort of conspiracy theory one, because I think Mhmm. People love to go down that route of science fiction. I kinda feel, if you look at where a lot of people's thinking is, it's a general AI intelligence, which, you know, we're many years away from, but it's being very carefully handled, because that's the sort of thing that people are scared about: a human level intelligence that at some point is almost sentient and, you know, we would have to, you know, look out for it.

Speaker 2:

Mainly what we've been talking about is NLPs, and, you know, it really is just what you could think of as an autocomplete mechanism. It's been trained on absolutely everything. You're asking it, you know, a specific question, and it's going, okay, well, just like you would expect to

Speaker 3:

It's not going. It's not going. Okay. It's not thinking. It's not going.

Speaker 3:

Okay. Let me let me think about it. Going.

Speaker 2:

Okay. Yes. It's

Speaker 3:

not taking anything. Do I like Yeah.

Speaker 2:

Exactly. So like like we would have thought in,

Speaker 3:

you

Speaker 2:

know, with, oh, yeah, exactly. I mean, it's what you see if you're typing into Google: I'm looking for a restaurant, and it may suggest near me, to autocomplete. It's really doing a similar, you know, they're very good at understanding what you're asking and being able to piece together the responses, and having a handle on how to use language in a way that, you know, I mean, they do it so well. It's understandable that Yeah.

Speaker 2:

You know, people are sold. I've heard of people sort of in relationships with NLPs, and there have even been lead software engineers that have said, don't unplug this, it's alive, because it does its job that well. But it really is just a matter of retaining context of what you've been talking about and being able to just carry on that sort of autocompletion. So I think we can say, unless we're unaware of some model that's about to be released, just bear in mind, this is 2024.

Speaker 2:

If you're watching this in 2028, could be different, could have taken over the world already. Who knows? But at this stage, no.

Speaker 1:

Yeah. Skynet hasn't taken over the world yet. I think it was a Netflix documentary called Killer Robots. There's a scene where, I think it was some scientists that were using it to find cures, and they flipped something from a positive to a negative, said, okay, now create the worst virus you can think of. And just using an old MacBook, it was able to produce new molecules, some of the most dangerous things in the world, within just 24 hours.

Speaker 1:

Cybersecurity, I know we've got clients that are using it, and what's trained to defend can also be trained to attack. So while it may be one-sided in the sense that it's not sentient, it can be used to Yeah. To really have somebody take over, which could be dangerous.

Speaker 2:

Yeah. Absolutely.

Speaker 3:

Yeah. Yeah. From that perspective, it's a tool. It is really powerful. And, you know, humans, we don't have a good track record of using powerful things in safe, easy ways.

Speaker 3:

But, yeah. That power comes with responsibility. Yeah. Yeah.

Speaker 1:

Cool. Let's jump to the the next one. Unclear ROI. It's a myth or fact.

Speaker 2:

I mean, that's an interesting one, because how much are you looking into the ROI, and how can you weigh your ROI? You know, what we've seen is, you know, how are you using it? What are the goals? Why are you implementing it? Because I think there can be very clear ROI in terms of, well Yeah.

Speaker 3:

Because I think you know, could you rise. Yeah.

Speaker 2:

Could you scale your team, you know, to be that much larger by making it more efficient, by implementing it? What would it mean if you didn't need to scale your team up to take on, you know, more orders, more clients? I think understanding what the technology can do for your business can make it quite clear how you can calculate your ROI, and it is definitely technology that promotes efficiency. So through efficiency, there's no doubt that there is always ROI there.

Speaker 3:

Yeah. Yeah. Exactly. So it comes down to, I mean, I'm just repeating really what you just said: for the purpose that you're doing it, you probably already had an existing task.

Speaker 3:

If you were doing it maybe manually or in a different way or using different tools, and now you want to try AI, the result will still be the same. Right? What you're trying to actually accomplish will very likely be the same, and you very likely have measurements around that already. If you don't have any measurements, then, yeah, this is gonna be something new, and you're just going to have to sort of see how it fits in and create those metrics and understand them. And then at least you can measure those metrics against different applications of AI trying to solve the same problem and go, okay,

Speaker 3:

This model works best. So you can still, you know, use those metrics really well. But, yeah, as Ben was saying, you know, ideally you have a process flow that we're just integrating into, and, ideally, you have metrics already existing there, and we can just see how much more optimized AI has actually made the process. And, you know, it should be pretty straightforward to measure if you have existing ones. Yeah.

Speaker 3:

And I think,

Speaker 1:

you know, a little plug for what we do. Obviously, when we go in and strategize product development for our clients, you know, we do what we call an immersion, where we try to understand and identify the risks involved in, you know, the features they wanna release. Do users really want it? What are the, you know, priorities of these things? And so we create a road map for them.

Speaker 1:

So I think doing your due diligence up front to understand what the costs are and what you're trying to do, and whether your users and your company need it. You know, if you do that due diligence up front, you do it right, and you get the leadership involved, it'll help reduce that risk and identify where is the best place to spend the money. And even if it's, you know, something that you don't need now, maybe it's something you do need later, and so you can plan for that accordingly.

Speaker 2:

Yeah. Absolutely. So I think that's busted.

Speaker 1:

Yeah.

Speaker 2:

In most cases, there would be a clear return on your investment.

Speaker 3:

Mhmm. Yeah. And if there isn't one, the exercise is creating those ROIs, which is, you know, something you would need to do anyway, even if you weren't using AI and you wanted to implement this new process. Yeah. That's part of business.

Speaker 1:

This next one is obviously going to be a hot topic: will NLPs replace human jobs?

Speaker 2:

So we had at least one that was slightly controversial. So that's

Speaker 3:

Absolutely. It'd be risky excluding it.

Speaker 1:

From a UX perspective, I'm going to say no. I mean, I've seen some ugly website designs coming out of AI. The logo designs are doing okay, but as for it designing your whole application for you? No, that's not going to happen. It comes up with some patterns I've seen. There are some new things from a Figma perspective, where it can create some really low-level wireframes to give you some ideas.

Speaker 1:

But I think it's probably going to be the same with a lot of, certain technical positions. But I'm curious to hear what you guys think from a development standpoint.

Speaker 2:

Well, I was just about to go off topic there, and then just,

Speaker 3:

wish you didn't mean what

Speaker 2:

I saw. We won't touch on that. I can't proceed.

Speaker 3:

You need us. No. A message. I still I would share I still to see it. Oh, okay.

Speaker 1:

We'll have AI edit that out.

Speaker 2:

There was a photo of a big construction site with a massive banner: "At least AI won't take our jobs." But, you know, it's kind of funny to see. There are certain things, like, you know, you look at the writers' strikes, and yes, there are certain things that are really concerning when you're in certain fields. But I think you've also got to look at how the landscape changes as well.

Speaker 2:

We're talking about prompt engineer positions, and really, there are going to be new jobs created from this. There's also the opportunity to become more competitive, which every business is looking for: avenues to become more competitive, to cut costs, to streamline. And this tool is all about efficiency. So Yeah. You know, I think, yes, there are some industries that are going to be affected, but there are other new career paths opened up by this technology.

Speaker 2:

And there needs to be a sort of displacement and replacement of, you know, talent.

Speaker 3:

Yep. To me, I mean, it's just new technology. We've, you know, gone through this as a human species many times before. Cars came along, and, you know, no longer were people taking you around by horse and buggy. Computers came along, and that was a huge thing.

Speaker 3:

You know, the amount of efficiency we get out of using computers compared to, like, pen and paper. I mean, I think that was a way bigger change in business dynamics, especially compared to what AI is going to bring. As you said, it's really good at efficiency, at working through really mundane tasks and giving you the answers that you're looking for. But it doesn't really understand how it gives you those answers. It doesn't understand what the questions even are.

Speaker 3:

There's still a human element that needs to really understand what you're working with and apply those AIs. To me, it's still just a tool, and how well you use that tool is how much you'll actually get out of it. If you use it poorly, then it's not going to replace anyone, because it's being used ineffectively. It's got a few good use cases. But, yeah, it's not going to replace all human jobs.

Speaker 3:

I mean, it's trained on models of existing human knowledge. As we as people come up with new ideas and evolve, it's not coming up with new ideas. It's just using what already exists, reproducing the patterns it learned at training time. Understanding new things, working through things, solving complex scenarios it hasn't seen before: it can't navigate around that. Yeah.

Speaker 3:

It's, I mean, I'm definitely not saying it's not going to impact jobs. Absolutely, it will. But, yeah, I think it's just technology progressing like it always has, and it's just another step on that ladder of technological progress, really. Yeah. You either use it correctly.

Speaker 1:

Yeah. You either have to be in front of that wave, or you're getting slammed and drowned, you know, if you're staying behind, or you're just floating in the ocean watching that wave push everybody else forward. I mean, that's just Yeah. Yeah.

Speaker 1:

Get ahead of it. But I think every single job is probably going to be affected somehow, whether you're in customer service all the way up to a doctor. And those construction sites, you know, I don't know if you saw the recent Boston Dynamics robot, but those things are getting pretty advanced.

Speaker 2:

They are. Yeah. Those are those are pretty amazing.

Speaker 1:

Yeah. Combine that with AI, in a physical form like that, I mean, that's something that could definitely be Terminator or I, Robot.

Speaker 2:

Well, I mean, there was a recent, I think in China, expo on robotics. And they're predicting these robotic partners, because, especially in the East, there are a lot of people growing up on their devices, making fewer friends, and there's actually a kind of loneliness epidemic. They're forecasting that when these people get to an age where they're earning a certain amount and can afford it, they'll still be lonely and will fork out for a robotic partner, which is insane. But you think, you know, if you've had a conversation with an NLP, already they're able to hold quite an interesting conversation. So, you know, plug that into robotic form.

Speaker 2:

It's very interesting to see where that will go.

Speaker 3:

Yeah. Yeah. Areas which you didn't even expect, I think, will, yeah, be influenced in some way or another.

Speaker 2:

I can do that.

Speaker 1:

So we're talking more Blade Runner and, what was it, Ex Machina

Speaker 3:

situation? Exactly.

Speaker 1:

Yeah. Cool. And I think for our final topic: NLP tools are all the same.

Speaker 2:

No. No. They're as diverse as anything else. But, you know, that

Speaker 3:

is the

Speaker 2:

thing. You know, one person may try ChatGPT or Bard, and they're like, no, that's not what we want, and, you know, they're missing the opportunity. They're not just different NLPs; they've been trained, I think, Joe, you brought it up earlier, with either a really narrow focus, doing one thing incredibly well and failing at everything outside it, or a broad focus, doing everything relatively well. But, you know, you've also got all the different setups of how those are built. So they are so varied.

Speaker 2:

And, you know, even coming back to the prompting, you can have something do something so specific. Like, you're seeing the plugins now with a lot of the ChatGPT setups: you've got a specific NLP that will speak to you like a doctor, or a psychiatrist, or a sports coach, or whatever you need. So, you know, they're super varied.

Speaker 3:

Yeah. I think it's worth bringing up the point again, because there's so much variance out there and the industry is changing so quickly. As a business, it's just so important to set yourself up in a position where you can switch between these changing technologies really easily, rather than investing all in one and going for some packaged product that integrates into every facet of your application so that you're completely tied to it. You want to be really flexible, able to move around between these tools as they each individually improve and as you come up with new business processes that you need your solution to solve. And that isn't a problem if you've set yourself up and designed your architecture in the right way. It can be very easy then.

Speaker 3:

Okay. Let's integrate this AI into this problem space and that one into that problem space. There's no one-size-fits-all solution for this. You have to pick individually. So, yeah, just make sure that you're in a position where you can do that, and you're good to go.

Speaker 2:

I think you just jumped ahead, Joe, to the summary. That was a pretty That was

Speaker 3:

a great Yeah. I I

Speaker 1:

think that was probably a good one. Unless you want to add something else before we wrap up, I think it's a good place to stop for this one.

Speaker 2:

A good takeaway. Yeah. I mean, you know, it is flexible as long as you're setting yourself up with the correct tools initially. Make that initial investment. Make sure you're engaging with someone who knows what they're doing and is able to guide you into a position where you have a system that's future-proof, as opposed to running in with maybe limited knowledge, grabbing something off the shelf, and then being bound to something you don't actually want, or that isn't being maintained properly as technology shifts and moves.

Speaker 1:

Great. Well, I think it's a good chance to wrap this up. A lot of good topics discussed on AI. Obviously, we want anybody listening to definitely share your thoughts on the topics that we've discussed. Submit any kind of questions or any future things that you'd like us to talk about.

Speaker 1:

We've got a lot of experience across our teams, and we'll be releasing these, hopefully weekly, on different topics of product, UX design, AI, and anything else in the tech industry. You know, obviously do the thumbs up, rate, review, subscribe to the podcast, and thanks for tuning in. And, yeah, we're looking forward to shooting the next one and talking about new and exciting things. Cool.

Speaker 2:

Catch you next time.

Speaker 1:

Take care.

Speaker 3:

Awesome. Thanks. See you.

Speaker 2:

And Joe is out of there.