Make an IIIMPACT - The User Inexperience Podcast

Welcome to another exciting episode of Make an IIIMPACT - The User Inexperience! 

Today, we have our hosts Makoto Kern, President @ IIIMPACT and his AI Integration team - Brynley Evans and Joe Kraft. We'll dive deep into the competitive AI development landscape, comparing top models like Claude and ChatGPT-4, and discuss the relevance of academic benchmarks versus practical applications. We'll also explore the pressing topic of AI's ability to understand sarcasm, a challenge that remains crucial for improving natural language processing. 

We'll touch on industry trends, including the recent controversies surrounding Adobe and the strategic moves of tech giants like Nvidia in AI development. Additionally, we'll discuss the growing skepticism in the media about AI's future, the importance of reducing latency, and how localized data centers and chipset advancements are shaping the tech industry. Stay tuned as we unravel these developments and their implications on both the tech sphere and broader society. Let's make an impact together!

IIIMPACT has been in business for 20+ years. Our growth success has been rewarded by being on the Inc. 5000 for the past 3 years in a row as one of the fastest-growing private companies in the US. Product Strategy to Design to Development - we reshape how you bring software visions to life. Our unique approach is designed to minimize risk and circumvent common challenges, ensuring our clients can bring innovative and impactful products to market with confidence and efficiency.

We facilitate rapid strategic planning that leads to intuitive UX design and better collaboration between business, design, and development. 

Bottom line. We help our clients launch better products, faster.

Support this channel by buying me a coffee: https://buymeacoffee.com/makotob

You can find us on Instagram here for more images and stories: @iiimpactdesign

You can find me on X here for thoughts, threads and curated news: @theiiimpact

Timestamps:

00:00 Figma's AI-driven features unveiled at conference.
06:24 Concerns about AI using proprietary data sources.
09:36 Claude lacks real-time updates; ChatGPT retrieves them.
10:33 Skeptical of benchmarks; prefers practical real-use tests.
14:25 AI models enhance language understanding, tone, and nuance.
17:29 Sarcasm uses tone to convey opposite meaning.
22:42 Text models tackle sarcasm with contextual analysis.
23:40 Detecting sarcasm in text using sentiment analysis.
28:36 Nvidia's share price soared due to AI pivot.
31:54 Cheaper AI models improve scalability and response times.
34:46 Taiwan's potential as chip technology hub.

What is Make an IIIMPACT - The User Inexperience Podcast?

IIIMPACT is a Product UX Design and Development Strategy Consulting Agency.

We emphasize strategic planning, intuitive UX design, and better collaboration between business, design to development. By integrating best practices with our clients, we not only speed up market entry but also enhance the overall quality of software products. We help our clients launch better products, faster.

We explore topics about product, strategy, design and development. Hear stories and learnings on how our experienced team has helped launch 100s of software products in almost every industry vertical.

Speaker 1:

Which company was trying to buy Figma? Adobe. And so Adobe's been in the news, and they're in the news for a negative reason. The community is in an uproar because, basically, in their terms and conditions Adobe tried to sneak in that their AI can be trained on all the data models that customers are using. Obviously, if you're creating something and they're able to use that to train other people's AI, that's the IP that's, you know, very unethical to steal from.

Speaker 2:

The more real our experiences and our conversations are with with these sort of AI assistants or these large language models, the the more we believe that what we're getting back is a real response. Just like if you're speaking to someone, you believe what they're saying, and it's all in that exchange. And I think to get to the ultimate product, things like sarcasm have to be factored in. Because if you drop something that's sarcastic and you get a response that someone really doesn't understand, then suddenly that illusion is broken as well.

Speaker 3:

Because even the GPT-4o, I still feel it's not as good as GPT-4, just the base model. I feel it's, you know, trying to come up with answers as best as it can even if it makes stuff up way more than GPT-4 base did. It's just trying a bit too hard to just output something rather than saying, I'm not too sure, I don't know, or, you know, try something else. It's just trying to look impressive but not really being that accurate.

Speaker 1:

Hello, everybody. Welcome back to another episode of Make an Impact, with your host, Makoto Kern, and I've got Joe and Brynley as well again. How's it going, guys? Good to be back.

Speaker 3:

Cover. Yeah.

Speaker 2:

Ready for the next one.

Speaker 1:

Today's topic, we'll probably discuss the latest news as far as AI is concerned, and then we'll go through some other latest and greatest things that are happening out there. But I think, to first kick it off, what's been in the news is the Figma releases. As UX designers, you know, we're always looking and seeing what's happening with the tools, and Figma just had Config last week in San Francisco, and they had a whole new set of features related to AI. And, you know, that's everything from generating mobile AI kind of prototyping. They also had where it can actually put together prototypes automatically for you, stitch them together, which is pretty cool.

Speaker 1:

Automatic naming of layers, which is you know, that's a big time time suck for us. And then I think they also had more of the Figma slides. I don't think that's really as as automated, but, you know, that's something that is probably like a more shiny object type of thing that's, oh, this is a cool thing to add if instead of using, you know, Google Slides or or PowerPoint type of thing. But, yeah, not sure if she's

Speaker 2:

really been following the Figma updates, but was there anything around creativity, or was it more around automation and assistance? That's what I'm fascinated about, where they can actually push the line in creative processes.

Speaker 1:

Mhmm. I think it was more around just making the design efficient. I'm still curious because they haven't released everything yet to the general public. They just showed what that looks like. So I'm curious as to, you know, when you're actually building... you know, when you're building templates, that's pretty easy.

Speaker 1:

Like, oh, this is a login page or this is a form page. You know, I think some of that is still, you know, kind of up for grabs as far as how creatively it's gonna create something. It's probably more wireframing. And so I'd be curious as to how that's going to evolve. I mean, it's obviously a first step, but I think, you know, with the auto layouts and things like that, which take a lot of time to do and to do right, that's where it's going to help us spend more time on the creative side and less time on the Yeah.

Speaker 1:

Exactly. Admin side. So I think that's really where it's at. Yeah.

Speaker 2:

I mean, that's a huge help.

Speaker 3:

What are the I

Speaker 2:

think that's always the go for Joe.

Speaker 3:

And I was gonna ask, what are the interfaces like? Were you actually, like, typing, like with ChatGPT, and sort of asking it to do things and it generates from there? Or is it Yeah.

Speaker 1:

That are you actually

Speaker 3:

interacting with Figma?

Speaker 1:

Yeah. You're interacting a little bit more with Figma. Some of it's just like a one-touch button type of thing. Some of it's more, you're typing and prompting it to do something. But again, they just showed snippets of it; it's not released yet into Figma, so I'd be curious what the actual release is going to look like and how that evolves.

Speaker 2:

I always sort of admired Figma in the way it's right from the beginning, it's just been pushing an amazing tool. So it was always intuitive to work with. So it'd be interesting to see if they can extend that same intuitive sort of interface into AI. Yeah. So it's gonna be interesting to see.

Speaker 1:

And I think that's a good jump into... which company was trying to buy Figma? Adobe. And so Adobe's been in the news, and they're in the news for a negative reason. And it's probably a good thing that that sale did not happen with Figma, because of Adobe's terms and conditions. That's been a pretty big... you know, the community is in an uproar because, basically, in their terms and conditions they tried to sneak in that their AI can be trained on all the data models that customers are using. And Mhmm.

Speaker 1:

It's not something that obviously, if you're creating something and they're able to use that to generate other people's AI

Speaker 3:

Yeah.

Speaker 1:

That's the IP that's, you know, very unethical to steal from.

Speaker 3:

Interesting. Yeah. Absolutely.

Speaker 1:

With Figma, they don't do that. And they said if they ever will, they'll explicitly tell you that's gonna happen. So that's, you know, that's just another

Speaker 3:

It's interesting. It's a general problem space around AI, because it's almost like a layer of abstraction between the data that it's being trained on and the actual output, what it looks like. You know, it's not like you're taking someone's proprietary code and then copying it into some other repository or making it available.

Speaker 3:

It's not very clear like that. You can't easily tell, if you're talking to a language model and it's giving you answers, or this is how this code looks, or this is what this Figma design looks like, where it actually got that from. It'll make sort of mild, subtle adjustments, so it's not a carbon copy of where it copied from originally. So it can be very hard then to pinpoint, okay, is this actually, you know, our proprietary models or data or information it's using, or did it just generate it itself?

Speaker 3:

And even with OpenAI in general and ChatGPT, you know, it's been pulled from so many sources, but it's hard for someone to go, okay, that's a really good answer it came up with, but, you know, that's very similar to a blog post I wrote, like, 3 years ago. Did it pull from that? Who knows? It's similar. It has similar talking points.

Speaker 3:

But, yeah, very hard to tell. So, yeah, it's a problem space in general. It's interesting to see, I think, as people become more aware of that, and more aware of how their data can be used to train models, that they're going to become a little bit more sensitive to how it's being used going forward. And as you said, when a company changed its terms and conditions previously, you know, okay. Sure.

Speaker 3:

But now it's, you know, everyone's watching that sort of thing way more closely. It's so much easier to sort of pull that data out and use it now than it was before.

Speaker 1:

Yeah. So I think the third topic I wanna discuss is obviously Claude's Sonnet release by Anthropic. You know, it's funny. It's almost like when Apple releases their iPhone and then Google releases theirs, and they're always trying to one-up each other. So, you know Yeah.

Speaker 1:

Exactly. It's one-upping GPT-4 in the test benchmarks, and so we'll see whether the 5.0 of ChatGPT will be better. Yeah. Hopefully, it's not a new camera.

Speaker 3:

I'm always I mean,

Speaker 2:

it does feel like a sort of arms race of AI. And I know, I think, you know, Mark Zuckerberg was getting stuck in recently. He's like, oh, you guys with your closed-off models are trying to create God. And it is, just as you say, always these sort of benchmarks that are being pushed, which is great. I mean, competition drives innovation.

Speaker 2:

So things are going to keep being pushed. But what's interesting about Claude 3.5 Sonnet is some of the benchmarks that they're using, which I was unaware of, things like the undergraduate-level knowledge. It's MMLU, which is Massive Multitask Language Understanding. So that's one way that they help models, I guess, get context and understanding. Then there's the graduate-level reasoning as well, or GPQA, which is an interesting kind of graduate-level Google-proof Q&A.

Speaker 2:

So a whole lot of experts have sort of put this system together of, I believe, almost handwritten questions, I stand to be corrected, on very difficult academic subjects, and they use that as something to base the reasoning on, as opposed to facts that you just pull and quote verbatim. And then the last one is the coding proficiency one, or HumanEval, which is apparently a data set of about 164, I think, handwritten programming problems. And that is used to measure the functional correctness of code that these large language models generate. So it's interesting, all the benchmarks they're using to get different measurements of the functional areas of these large language models.

Speaker 2:

Mhmm.
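The HumanEval-style functional-correctness scoring discussed above boils down to a simple loop: take the model's code completion and run hidden unit tests against it. Here's a minimal sketch with an invented sample problem (real HumanEval has 164 handwritten problems; the one below is not from that set):

```python
# Minimal sketch of HumanEval-style functional-correctness scoring.
# The sample problem below is invented for illustration; real HumanEval
# pairs each of its 164 prompts with hidden unit tests.

def check_completion(completion_src: str, test_src: str) -> bool:
    """Execute a model's code completion, then run the unit tests.
    Returns True only if every assertion passes."""
    namespace = {}
    try:
        exec(completion_src, namespace)   # define the candidate function
        exec(test_src, namespace)         # run the hidden tests
        return True
    except Exception:
        return False

# One invented "problem": the prompt asks for a running-sum function.
candidate = """
def running_sum(xs):
    out, total = [], 0
    for x in xs:
        total += x
        out.append(total)
    return out
"""
tests = """
assert running_sum([1, 2, 3]) == [1, 3, 6]
assert running_sum([]) == []
"""

print(check_completion(candidate, tests))  # True: the completion passes
```

Real harnesses sandbox the execution and report a pass@k score over many sampled completions; this sketch skips both for brevity.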

Speaker 1:

You know, it's interesting where Claude, from my understanding, and hopefully I'm not missing something, does not go out and pull from real-time sources like websites. You have to upload that data yourself. ChatGPT-4 will go out and do that. Claude will stop you and say, I don't have access to go out and do that. So it's interesting, because I've tried to do financial investment type of, you know, analytics with either one of those, where it's analyzing, you know, stock picks or crypto or whatever it is for the last year or last 6 months, because things vary so much based on the news and things like that versus 2 years ago.

Speaker 1:

So that's where Claude will say, well, I only have records up to, you know, this year, which I think is 2 years ago. But ChatGPT will go out and actually scan a lot of things and then analyze them, which I think is potentially far superior, if they're accurate.

Speaker 2:

I wonder what the reason is for that.

Speaker 3:

I think... I mean, that's a really good point, because again, the benchmarks that they sort of show and demonstrate, you know, the capabilities... I'm always skeptical of those benchmarks because it seems, again, very academic. They're asking very specific, as you say, graduate-level questions or specific scientific questions, expecting specific answers back, which is all good and well. But if you train your model very specifically to try and target those benchmark kind of questions, you can kind of fudge the numbers a bit, I feel. You know, it's very easy to do that. So I'd be happier with more practical tests around those sorts of questions.

Speaker 3:

So sorry, around those benchmarks: actual real use. Because even GPT-4o, and this is obviously completely anecdotal, but I still feel it's not as good as GPT-4, just the base model, just from my own personal use with it. I feel it's, you know, trying to come up with answers as best as it can, even if it makes stuff up way more than GPT-4 base did. And it's just trying a bit too hard to just output something rather than saying, I'm not too sure, I don't know, or, you know, try something else, or, you know, I need more information to answer this for you properly.

Speaker 3:

It's just trying to look impressive, but not really being that accurate. So I would be interested to see some tests come out that are a bit more practical based. And then, as Makoto was saying, when we're comparing these models, there's a bit more to them than just the model itself. Like, sure. Okay.

Speaker 3:

Let's say we did come up with some metrics, and we can see one model's a lot better at generation than the other. But what about the tools and the actual company itself? I mean, with OpenAI, you have all the image capabilities baked in; as Makoto said, the Bing search will go out and pull information in; it has data analytics inside; it allows you to upload documents really easily; it has all these other tools that make it way more user friendly. And also, from a developer standpoint, the API is really easy to integrate with, to set up accounts, to set up billing. And so I think when you're looking at determining whether we should switch to another model or not, there are definitely a lot more considerations than just the pure answers coming back.

Speaker 3:

It's like you kind of have to look at their entire suite of products. And what OpenAI is isn't just the ChatGPT model. It's really all the tools and services they're building out around it that really make a powerful sort of solution to go with. Yeah. I think there's always quite a bit more to consider there than just looking at the models themselves.

Speaker 3:

Again, it always depends on your use case, but at least from my perspective.

Speaker 1:

Yeah. I've been seeing a lot more negative type of articles, around the "it's a bubble," "it's not really AI as you think it is" angle. Maybe what you're saying is around some of that testing. Yeah.

Speaker 1:

So I'm curious where some of this... you know, obviously, negative articles could be more clickbaity, but I'm just curious as to, you know, where the focus is going to be next. Is it just continuously improving on those algorithms or models? Is it addressing a specific area? Is it visual, like, yeah, visual reasoning? Is it more the analytical side of things?

Speaker 1:

Is it more, deciphering, you know, the data that's out there?

Speaker 2:

I think multimodal is definitely kind of, I think, where it's going. Just having vision, sound, text, all that reasoning, I mean, just everything. I was just thinking, I saw the headlines of the new AirPods. I think they're thinking of putting cameras in there. Oh, really?

Speaker 3:

I know.

Speaker 2:

Do you think

Speaker 3:

That's like a

Speaker 2:

That's really just, you know, where they're obviously just going to try and get out everything that's possible to get visual information

Speaker 1:

because it's

Speaker 2:

so important. And maybe that's a good transition into what I wanted to chat about, if we're finished on that.

Speaker 1:

Yeah. Let's do

Speaker 2:

it. But AI and sarcasm, just because we're talking about large language models so much. It's interesting just to understand the intricacies of language, especially since the models are mostly text based at the moment, and how much goes into actually helping them understand things like sarcasm. So we're talking about Claude 3.5 Sonnet and, you know, how that's sort of trying to one-up ChatGPT-4o, and they're obviously both really good large language models. And, you know, Claude's saying it's demonstrating enhanced capabilities in understanding things specifically like nuance, humor, complex instructions, and also sort of excelling at producing high-quality content with a more natural tone. And it's all really about this tone and being natural, I guess, and coming across as natural and believable.

Speaker 2:

And so, sort of closer to home, on IIIMPACT's projects that we're working on, there's one specific client we've been working with to really help personify their AI experience, their AI assistant. You know, really sort of tweaking the prompts, using prompt engineering to guide its reactions and trying to keep it following a brand voice, so to speak. And I think just in that process, you sort of take for granted how much cultural nuance there is in language as well, how tone is used, and how important context is. We're talking about visual: I'm watching both of you because we're used to visual cues when we converse. So when you just have text, it makes it really difficult to understand some of those really subtle cues that you're picking up.

Speaker 2:

And if you're conversing with an AI, obviously, the more real our experiences and conversations are with these AI assistants or these large language models, the more we believe that what we're getting back is a real response, just like if you're speaking to someone, you believe what they're saying, and it's all in that exchange. And I think to get to the ultimate product, things like sarcasm have to be factored in, because if you drop something that's sarcastic and you get a response from something that really doesn't understand, then suddenly that illusion is broken as well. Even speaking to someone, if you crack a joke and the person doesn't get it, you're kind of like... that kills the conversation in a way.
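The brand-voice prompt engineering described here usually comes down to a system prompt that rides along with every request. A minimal sketch, assuming a chat-style message API; the company name and voice rules below are invented for illustration, not any client's actual prompt:

```python
# Sketch of steering an assistant's tone with a system prompt.
# BRAND_VOICE is an invented example, not a real client's prompt.

BRAND_VOICE = """You are the assistant for Acme Co.
Voice rules:
- Warm and plainspoken; no corporate jargon.
- If the user sounds sarcastic or frustrated, acknowledge it before answering.
- Never overpromise; say "I'm not sure" when you aren't."""

def build_messages(history: list[dict], user_text: str) -> list[dict]:
    """Prepend the brand-voice system prompt to every request so the
    model's tone stays consistent across the whole conversation."""
    return [{"role": "system", "content": BRAND_VOICE},
            *history,
            {"role": "user", "content": user_text}]

msgs = build_messages([], "Oh great, another error message. Thanks a lot.")
print(msgs[0]["role"])   # the system prompt always leads the request
```

The resulting list is what you would hand to a chat-completion endpoint; keeping the system prompt out of the visible history is what keeps the persona stable turn after turn.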

Speaker 3:

Yeah. Oh,

Speaker 1:

yeah. It's not gonna kill us in the future. Exactly.

Speaker 2:

Exactly. So that was something that I was gonna joke about. I quickly used Google AI Labs to ask about AI and sarcasm examples, to which it replied, I'm unsure of why people are scared of AI. It's not like we are going to

Speaker 1:

take over the world,

Speaker 2:

so you beat me to that. But, of course, that's a really bad joke. But, you know, if we look at sarcasm and what a completely counterintuitive use of language it is, and we go back to the definition of it: it's remarking on something in a way that clearly means the opposite of what you're saying, or at least of the words you're using, in order to hurt someone's feelings or to criticize something in a humorous way. So, for example, if there's a team that can't get anything right, you'd sort of go, oh, they're definitely on top of it.

Speaker 2:

And there's certain inflection and insinuation that tells you it's sarcasm because of the way it's delivered. And there's certain sort of intonation, the changes in pitch, you know, that obviously convey the meaning and emotions behind the words themselves. So, you know, if I had to say something like, the release date is tomorrow, you go, cool. That's a fact, because it's got a falling intonation at the end. The release date is tomorrow, and it goes down.

Speaker 2:

And you go, cool, statement. If I go, the release date is tomorrow, and it goes up with a rising intonation, then you're like, wow, he's asking a question. But that's completely lost with, you know, any sort of text-based format. It's obviously a big problem that these text-based large language models just can't hear, so they can't pick that up.

Speaker 2:

And when you're communicating with them, it's very difficult to understand things like sarcasm. So I'm going to drop something on the chat, either Makoto or Joe. Just read this out.

Speaker 3:

Perfect, Makoto.

Speaker 1:

Really interesting topic.

Speaker 2:

So there we go. You delivered that fairly normally, like, with no sort of inflection. Although maybe there was a natural sarcasm in there. You can get that. I'm not sure.

Speaker 1:

That's just me.

Speaker 2:

Anyway, you get that, and it's a normal statement. So you go, okay, it must be a really interesting topic. But again, with that inflection, you could go, "really interesting topic," where the intonation could point to sarcasm. So again, with the way the words are delivered, the intonation, you can completely negate the meaning of the statement.

Speaker 2:

So the words are saying one thing, but you've turned it on its head and it's become something completely different. And I think even for humans, it's difficult. Kind of a

Speaker 1:

yeah. You have to know the person.

Speaker 2:

You really need that additional... yeah, you need to have that additional context. So if you said what you said but you rolled your eyes or something, then it would be like, okay, well, you're being sarcastic. But without those cues, it's difficult. And that's why often in fiction, you'll see someone write something like, "really interesting topic,"

Speaker 2:

he said, as he sighed disappointedly. So you've gotta have that additional context to go, okay, you know. So when we're reading, we don't even know it's sarcasm unless we have that.

Speaker 1:

I'm trying to think... yeah, the applications for that are interesting, because obviously, when you're just doing text with whatever you're talking to, it's probably more analytical, mathematical, whatever it is you're trying to do. You're not really conversing with it, or there's less need to understand that. But if you're utilizing it in a way where you're moderating maybe a social media network or whatever, to try to see, did they say something wrong?

Speaker 1:

Did they say something, whatever? Or it could be something that auto-translates for the closed captions. How do they pick up on those inflections and do it correctly, where it's

Speaker 2:

not? I mean, opinion pieces and things like that as well. You know, how do you know whether someone, you know, could be talking about global warming, but they're being sarcastic at the end and, you know, like, this is great for the earth. Meanwhile, it's not. And the AI can't be going, this is a good thing.

Speaker 2:

It has to sort of be able to pick that up, because I think when you look at a lot of those emergent behaviors, like being able to summarize things, it really needs to understand that to get an accurate summary. So I think just generally, when you're asking it to do something, you're kind of relying on it to go through and understand sarcasm and where that's going. So, yeah, just looking at how we use them as well, how we even use things like emojis when we're communicating on text channels. You know? So, you know, you can sort of understand.

Speaker 2:

You say something sarcastic smiley face, you're like, okay. This is joking. Mhmm. So that's, you know, that's interesting. So

Speaker 3:

kind of

Speaker 2:

as we move into the go for Joe.

Speaker 3:

Yeah. No. No. I was just going to say, this is really what ChatGPT, or OpenAI's voice capabilities, were doing in the demo that we saw a few weeks ago: just understanding all that inflection. And I'll be really interested to see how they actually trained it.

Speaker 3:

Because as you're talking about it now, obviously, we understand it, we can see the demo, and we can see how good it is. But how do you train a model on inflection? I would be really interested to see the actual data they used to do that. Because do they just feed it a lot of text and say, these are, like, a million examples of someone being sarcastic? With that text, feed it audio lines, and it just sort of understands, okay.

Speaker 3:

Well, this is what sarcasm is. And then you feed it another million lines of someone being extremely happy and someone being extremely angry, and you just keep doing that. I don't know. But it'll be interesting to see what the methodology was.

Speaker 2:

Well, it is. I think, because obviously they started with base text models... I'll go into some of the ways they've actually got around the problem, which I found quite interesting. But obviously, as we get to the multimodal models, there at least is a way around that, in terms of there being visual cues. If it's looking at someone speaking, it would pick up something different there. So obviously, as you were saying, it needs to be trained on those visual cues, on the inflection, that sort of thing.

Speaker 2:

But a lot of it is just the nature of sarcasm. I think it's often delivered so unexpectedly, and that's what sort of makes it harder to pick up as well. So the text-based models have obviously gone down this road of almost mapping out, how can we understand things like sarcasm? I've got five kind of points on how they actually help it understand that. And the first one is contextual analysis. So an example would be someone going, oh, fantastic.

Speaker 2:

Another meeting at 6 AM. Again, if it's looking at the text at face value, it's like, oh, great, this person loves early morning meetings. Like, they're excited that there's another one.

Speaker 1:

So he

Speaker 2:

kind of has to go through the text and say, has this person actually referred to early morning meetings, or to not liking mornings, or liking mornings, to get an understanding of whether that person is being genuine or actually using sarcasm. And then the next one is sentiment analysis. So an example there is someone going, I absolutely love getting stuck in traffic for hours. And it actually has to look at that and go, wait a minute, there's a mismatch that's pointing to sarcasm. There's "I absolutely love," which is positive, but then getting stuck in traffic is a negative thing.

Speaker 2:

So with that mismatch, it's likely sarcasm. And then the third one is training data, which is quite a funny one. So an example would be, great job on breaking the printer again. But, you know, you don't know that's necessarily a bad thing until it's learned, okay, breaking a printer is not a good thing. So in lots of training data, it's like, yeah.

Speaker 2:

Yeah, the printer not working is a bad thing. So they're saying a good thing, great job, about doing a bad thing. So it has to be sarcastic. And the second-to-last one is linguistic cues.

Speaker 2:

So someone going, oh, just wonderful, another rainy day. So it's, again, looking at those cues like, all right, "wonderful" and "just," that doesn't add up. Those two are obviously cues for sarcasm. And then the last one would be cultural and social knowledge, where an example would be, oh, sure, because that's exactly how things work in real life. So it's knowing that people often use "oh, sure" to preface a sarcastic disagreement.

Speaker 2:

And recognizing that the statement likely means the opposite of what's being said, and then sort of indicating sarcasm. So it's quite interesting. I found it interesting unpacking those small kind of clues to actually figure out what sarcasm would be and how it should be detected. And that's why this has been in the news recently as well; it's been a hurdle to overcome with these large language models.
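As a toy illustration of the sentiment-mismatch heuristic described above, here's a sketch. The word lists are tiny invented stand-ins for a real sentiment lexicon, and production systems would use trained classifiers rather than set lookups:

```python
# Toy sentiment-mismatch sarcasm heuristic: a strongly positive phrase
# attached to a clearly negative situation is flagged as likely sarcasm.
# The word lists are tiny invented stand-ins for a real sentiment lexicon.

POSITIVE = {"love", "fantastic", "wonderful", "great", "perfect"}
NEGATIVE = {"traffic", "stuck", "broken", "rainy", "meeting"}

def likely_sarcasm(text: str) -> bool:
    """Flag text whose positive framing collides with a negative topic."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    has_positive = bool(words & POSITIVE)
    has_negative = bool(words & NEGATIVE)
    # Mismatch: positive sentiment about a negative situation.
    return has_positive and has_negative

print(likely_sarcasm("I absolutely love getting stuck in traffic"))  # True
print(likely_sarcasm("The release date is tomorrow"))                # False
```

The episode's examples map straight onto this: "oh, fantastic, another meeting at 6 AM" trips the mismatch, while a plain factual statement does not. Real detectors layer in the other cues discussed here, context, training data, and cultural knowledge, on top of this kind of signal.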

Speaker 3:

Yeah. I mean, it can make a big difference, because if you're talking to these models and you say something like, thanks, that really helped me, really sarcastically, and it goes, sure. My pleasure.

Speaker 3:

Let me know if there's anything else I can do for you. It immediately, like, pulls you out of the experience. And you're just like, this thing is just not connecting with me. Yeah. But if it's like

Speaker 2:

Yeah. Exactly.

Speaker 3:

I don't think that you're actually happy with that. Let me try something else, or what else can I try to help you with? That may go a long way. Yeah. Yeah.

Speaker 3:

Yeah.

Speaker 1:

So do you wanna hear what I got when I asked ChatGPT to answer the Adobe controversy in a very sarcastic tone? I'll read back to you what it said. Oh, Adobe. What a shining beacon of trust and transparency. So in a truly shocking turn of events, Adobe faced some minor backlash over its decision to review user content uploaded to its platform.

Speaker 1:

I mean, who wouldn't want a multibillion dollar company peeking at their private work? Privacy is so last century. Of course, this little hiccup came to light when users started noticing that the fine print of Adobe's terms of service allows the company to review and use uploaded content to train their Firefly AI models. Naturally, the tech giant was quick to clarify that they were not, in fact, misusing customer content. They would never dream of doing such a thing except, you know, when they did.

Speaker 1:

To quell the growing discontent, Adobe had to backpedal faster than a politician caught in a scandal. They reassured everyone that their precious artwork and sensitive data were safe and sound, untouched by the prying algorithms of their AI. Sure. We believe you, Adobe, just like we believe that unicorns exist and chocolate has no calories.

Speaker 1:

Oh, that's great. There's even more. The whole situation is a perfect example of corporate transparency at its finest. It's not like they would ever exploit user data for their own gain. Right?

Speaker 1:

I mean, why would they need to when they could simply invent new features to dazzle us all with? In summary, Adobe's recent controversy has been a master class in how to handle a PR crisis: deny, deflect, and then promise to do better. Bravo, Adobe. Bravo. That's pretty good.

Speaker 3:

Yeah. This is good.

Speaker 1:

It's better than I probably could've done. Yeah.

Speaker 3:

Thank you for your service on LinkedIn.

Speaker 1:

Oh, yeah. For sure. So, yeah, I don't know. Joe, do you wanna jump on your topic, or should we should we

Speaker 3:

Yeah. It's a really small topic, so we can just quickly talk about it. Just an interesting development around where AI is heading, and trying to get a gauge on that. A really interesting one is seeing the costs that are going into supporting AI solutions and where that's heading. So looking at NVIDIA's share price.

Speaker 3:

So previously, NVIDIA traditionally made graphics cards; that was their main market. And what they found was that the computational power of those graphics cards could be used for AI really effectively. So generative AI, whether OpenAI or Claude or any sort of generative AI, can use these graphics cards really well to generate answers.

Speaker 3:

And so NVIDIA suddenly found themselves in a position where they were one of the foremost manufacturers of these cards, and they've now pivoted their entire company direction. Obviously, they're still doing traditional graphics cards, but now they're specifically trying to engineer and build out dedicated AI chipsets for data centers. And it's interesting just seeing what's happened to the share price, which has, you know, completely skyrocketed. It's gone up, I think, something like 1,000 or 1,500 percent in the last two years, just based on speculation about where it's going. And all that investment and share value is based around their data center business. Right?

Speaker 3:

That's exactly what it's going to be for, and they're going to invest all that money into data centers. So the CEO said earlier this year that the existing $1 trillion of data centers will likely double to $2 trillion by 2030. So they're just building data centers, billions of dollars' worth of data centers packed full of these cards, which is a really interesting indication, sort of a market gauge of demand for this. Right? Their perspective is that this is just gonna grow, and they're massively investing into it.

Speaker 3:

And whenever you see concerns around bubbles, consider that people aren't investing in NVIDIA based on what exists today, like OpenAI generating answers. That's not what they're being invested in for. People are investing for the future version of where they can see AI going, and the actual foundations to support it. And speaking to that, Microsoft and OpenAI are planning to build a data center estimated to cost more than $100 billion, to begin running in 2028. So they want to try to reduce their reliance on NVIDIA and build out their own data centers.

Speaker 3:

And so that's a huge, huge move there too, which is absolutely crazy. So if anyone's wondering whether generative AI is a bubble, this is one way to look at it and go, well, there's definitely lots of money being invested into it. It's not just speculation at that point. They're basing their entire company's forward momentum on it, putting billions of dollars into it. So, yeah, that's very interesting to see. Wow. Yeah.

Speaker 3:

That is really interesting. Yeah. We will see. And obviously, in practical terms for just you and I, that translates to way cheaper models, way cheaper interactions with AI. At the moment, you still have to consider the cost. Even though it's not exorbitant, it depends on the number of users you have and how much you want to scale. If you're dealing with a thousand users, sure, you can put some guardrails in, and that's fine.

Speaker 3:

But if you're trying to create something that you feel could be used by hundreds of thousands of users, those costs really do matter. And you can start seeing those costs go down just based on, you know, better data centers and faster generation too. Because at the moment, when you ask a question and it takes, like, five seconds to start generating an answer, that's also computational power. Right? And that's shared among a data center. So you're probably placed in a queue before it gets to your answer.

Speaker 3:

A lot of that latency is just down to the availability of that compute. And as these bigger data centers come up and become more localized, like you would get some in South Africa, some in the UK, they won't all be concentrated in certain data centers around the world. They'll be more localized. You'll start getting way quicker responses, especially with things like voice, where it matters even more, where you want that instant feedback. You don't wanna ask a question and wait, like, a minute for the response to come back to you in voice.

Speaker 3:

So, yeah, really good in practical terms for us, yeah, using it and building our products in that space. Yeah.

Speaker 1:

Yeah. I think I saw something recently where the CEO was presenting where they're going with the company. It was something to do with their architecture. I don't know if you saw anything around that, but it seemed like they were redesigning the architecture of their chipsets and everything to work in a certain way, more efficiently. And I'm curious as to how much farther ahead they are of AMD and some of these other competitors.

Speaker 1:

Are they even gonna be able to catch up? You know? I don't know if they're gonna start being able to poach each other's employees, if AMD is able to poach NVIDIA's employees for exorbitant amounts of money to bring them over, same with Microsoft. I'm curious about how those AI wars are gonna heat up between employees and companies.

Speaker 3:

It's also interesting where they'll start building those chipsets too, because at the moment, I think a large amount of the chipset manufacturing is in Taiwan, which is a huge part of their economy. And it'll be interesting to see if that's going to affect countries in general, just the sheer number of new chipsets that are required for all this compute that they're planning and investing billions and billions of dollars in, and where those actual chips are going to get built.

Speaker 1:

It's gotta be a gold rush

Speaker 3:

for Taiwan. Huge ramifications. Yeah. Mhmm.

Speaker 1:

And China, I'm sure, is looking at it like, hey, remember that agreement? You're not a country. Yeah.

Speaker 3:

Yeah. Exactly.

Speaker 1:

Yeah. I was in Taiwan four or five years ago. And, I mean, it was an amazing country to visit. It was a surprise. And so I'm curious to see where it's going. I mean, Taipei, it was a wonderful city.

Speaker 1:

It was huge, and there's a lot going on, but I can only imagine where that's gonna go in the next five to ten years if they're gonna be the central hub of chip technology. I think this is a good place to end this episode of the podcast. Thanks, everybody, for tuning in. As usual, like and subscribe, and tune in next time. I'm sure there'll be some new interesting topics that we'll be discussing. Take care, everybody.

Speaker 1:

Awesome.

Speaker 2:

See you next

Speaker 1:

time.