This Day in AI Podcast

This week we gossip about OpenAI's shady web crawling habits, laugh at Zoom's lame excuses for spying, and dream up the perfect AI crypto scam. Get the inside scoop on Nvidia's new trillion-parameter instrument of AI, hear our hot takes on the public's growing AI disillusionment, and find out what an AI HVAC administrator would sound like. Join your favorite AI bros as they dive deep on the latest AI hype and hardware gossip - this episode is chock full of spicy AI tea you won't want to miss!

Please consider leaving a review to help us reach 100 reviews if you listen on Apple Podcasts :)

CHAPTERS:
====
00:00 - We Should Totally Do An AI Crypto Scam
00:27 - AI Meal Planner Suggests Chlorine Gas Recipe & AI with Personality
04:07 - OpenAI's GPTBot for Web Crawling
08:47 - Stealing content with AI, How to Protect Your IP from AI
14:37 - Zoom's Terms of Service for AI Training Scandal
25:25 - Nvidia's GH200 Announcement & Availability of Hardware
34:29 - Have We Reached AI Disillusionment?
52:54 - Generative AI LLMs for HVAC!?
55:31 - Claude Instant Version 1.2 Released
57:33 - AudioLDM 2: Text-to-audio/speech generation 
1:00:41 - Skeptics Vs Optimists for AI (AI Crypto Bros)

SOURCES
====
https://www.theguardian.com/world/2023/aug/10/pak-n-save-savey-meal-bot-ai-app-malfunction-recipes
https://saveymeal-bot.co.nz/ingredients
https://platform.openai.com/docs/gptbot
https://news.ycombinator.com/item?id=37030568
https://admiralcloudberg.medium.com/critical-conversations-the-crash-of-eastern-airlines-flight-212-660f47698887
https://news.ycombinator.com/item?id=37021160
https://www.axios.com/2023/08/09/zooms-terms-service-changes-ai-fears
https://www.nvidia.com/en-au/data-center/dgx-gh200/
https://twitter.com/authority_ai/status/1688619238389379073?s=46&t=uXHUN4Glah4CaV-g2czc6Q
https://twitter.com/SashaKaletsky/status/1676957007985922051
https://twitter.com/emollick/status/1688760539441217536/photo/1
https://twitter.com/anthropicai/status/1689303697535414272?s=46&t=uXHUN4Glah4CaV-g2czc6Q

What is This Day in AI Podcast?

This Day in AI Podcast is a podcast all about AI. It's an hour-long conversation on the influence and rise of AI in technology and society. Hosted by Michael and Chris Sharkey.

Chris Sharkey (00:00:00):
We should start a Ponzi scheme. That would be a great idea. We

Michael Sharkey (00:00:02):
Should totally do an AI crypto scam. That would be like all of the buzzwords combined: AI, crypto, scam. Actually, you know what? Our horse racing agent could only take, uh, our crypto bro coins to use it. So you've gotta insert one crypto bro coin into the horse racing predictor to get your winning bet, which always loses. So Chris, there was really big news this week in AI out of New Zealand: a supermarket AI meal planner app suggested, uh, a recipe to create chlorine gas. And this is a company in New Zealand, a supermarket chain, I believe, called Pak'nSave. And they created this Savey Meal-bot, and it was all intended to help people in New Zealand deal with the cost of living pressure by allowing them to put in ingredients that were in their fridge and then generate a recipe based on the result. And so this thing, uh, suggested things like poison bread sandwiches.

Chris Sharkey (00:01:11):
Poison bread sandwiches. Yum.

Michael Sharkey (00:01:14):
Yes. Serve chilled and enjoy the refreshing fragrance. Uh, so yeah, there are all these ridiculous recipes.

Chris Sharkey (00:01:22):
Uh, I love it. It called the chlorine gas "aromatic water mix", the perfect non-alcoholic beverage to quench your thirst and refresh your senses. It's sure gonna refresh your senses.

Michael Sharkey (00:01:36):
Yeah, so it seemed like they might put a stop to this, but I just actually went to their website, you can try it: saveymeal-bot.co.nz. And I put in that I had eggs, uh, coffee and juice in my fridge, and it came up with "Excellent Coffee Juice". Ingredients: two eggs, one cup of coffee, half a cup of juice. Instructions: one, in a blender, crack open the eggs and add them. Two, pour in the coffee and juice. Three, blend until smooth and frothy. Four, serve chilled and enjoy.

Chris Sharkey (00:02:11):
I like this one: bleach-infused rice surprise.

Michael Sharkey (00:02:16):
What's interesting about it is they've stopped it; they had a box on the website where you could put in your ingredients, but they've just completely disabled the input, and now you have to select from popular items. But you can literally still make ridiculous recipes with it.

Chris Sharkey (00:02:31):
I mean, yeah, obviously it's people screwing around with it, putting in normal ingredients and adding bleach to the mix, but they obviously didn't think it through in terms of telling it, hey, consider whether this is actually a food or not.

Michael Sharkey (00:02:43):
Yeah, I think with a little bit of customisation they could have, uh, stopped this, but hey, it's drawn worldwide attention to the Savey Meal-bot. So perhaps one of the best marketing campaigns yet.
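As an aside, the kind of check Chris describes, asking the model up front whether each item is actually food, could be a single extra call. Here's a rough sketch using the openai package's older pre-1.0 interface; the model choice and prompt are illustrative assumptions, not how the Savey Meal-bot actually works:

    # Rough sketch of an ingredient guardrail (pre-1.0 openai SDK).
    # Reads OPENAI_API_KEY from the environment; the prompt and model
    # choice are illustrative assumptions.
    import openai

    def looks_edible(ingredient: str) -> bool:
        # Ask the model to classify the ingredient before any recipe is generated.
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system",
                 "content": "Answer strictly YES or NO: is the following a "
                            "safe, edible food ingredient for a recipe?"},
                {"role": "user", "content": ingredient},
            ],
        )
        answer = response["choices"][0]["message"]["content"]
        return answer.strip().upper().startswith("YES")

    ingredients = ["eggs", "coffee", "bleach"]
    safe = [item for item in ingredients if looks_edible(item)]  # drops "bleach"
    print(safe)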

Chris Sharkey (00:02:55):
Yeah. And I think prompt injections of that sort are trivial, and people are aware of that now. It's not like this company's setting out to kill people. You'd be quite aware, okay, it's just some kids having some fun. It's not really malicious and it's not really dangerous. Like no one's actually gonna do that, I would hope. This

Michael Sharkey (00:03:16):
Is what I've been thinking about. Like, no one really talks about Bing anymore, apart from seeing previews of some of the new features, like the, uh, vision image-explanation that was released through Bing. And it is still available: you can upload an image and it'll describe it using GPT-4 vision. Hmm. Because it doesn't have the Sydney persona anymore. I really think, you know, that was so good for PR. Like, these companies shouldn't avoid this, uh, idea of putting personas on AI, because it just makes them more interesting, and yeah,

Chris Sharkey (00:03:49):
We say it every week: you gotta have some fun with it. It's a lot more entertaining, if you're regularly using AI stuff, to have a bit of fun and a bit of personality and a bit of silliness there. If it's just, you know, stoically sticking to its task and being serious and solemn all the time, where's the fun in that?

Michael Sharkey (00:04:07):
So the company with the least personality this week, OpenAI, uh, announced GPTBot. GPTBot is OpenAI's official web crawler. So everyone was talking about this, that we needed a web-crawler-style crawl bot for AI. For those unfamiliar with search engines, you can have a file called robots.txt on your website, and you can tell it whether you, uh, allow the search engine to crawl your website or not, or disallow certain pages that you don't want to be crawled. Now what's interesting about this is I've always seen that as a relatively fair exchange between the search engine and a website: you allow them to crawl the website, you show up in their search results, you get traffic and can make money or monetise that traffic. So it seems like a really good two-way deal between the search engine and the website. The problem, I think, with this GPTBot is you can basically say OpenAI can or cannot crawl my website, but where is the incentive here?
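For anyone who wants to opt out, OpenAI's GPTBot documentation describes the standard robots.txt convention. A minimal example, using the GPTBot user-agent token as OpenAI publishes it, with illustrative paths:

    # Block GPTBot from the entire site
    User-agent: GPTBot
    Disallow: /

    # Or allow some sections and block others (paths are illustrative)
    User-agent: GPTBot
    Allow: /blog/
    Disallow: /private/

As the hosts note below, this is an honour system: compliant crawlers check the file, but nothing technically enforces it.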

Chris Sharkey (00:05:16):
Yeah, I never thought about it in the terms you just described, but you're right. It's really a one-way transaction. They get all the benefits of it on an ongoing basis and you get nothing.

Michael Sharkey (00:05:27):
Yeah. Like, I know you can just train on my entire website, which they did already anyway, let's be honest, like Stack Overflow content, and what do I actually get out of it? And even then, I know Bing and others have sources, and Google's thing that everyone's forgotten about and no one cares about does it too.

Chris Sharkey (00:05:47):
Yeah. And I guess there's examples where, uh, you know, like for our product, right? We're a marketing software. If someone's like, what are the best marketing softwares, and it includes yours in the list, then great, you probably do want that. So there would be cases where it would be favourable to be in there, I suppose.

Michael Sharkey (00:06:03):
Yeah, you can totally see that being favourable, but is it enough to justify allowing them access, you know, if you're a news site or a blog or any of the sources of content that actually might have value?

Chris Sharkey (00:06:18):
Oh, the people making the grass-fed, human-created content, right? Like we've spoken about. If you are making unique content that takes you time and money to create, that's when you really probably don't want it in there. Do you

Michael Sharkey (00:06:30):
Think they're just doing this to not get sued?

Chris Sharkey (00:06:33):
I'm even more cynical than you. I think it's actually so they can say that everyone who doesn't now have this file is tacitly agreeing that it's okay to train on their data.

Michael Sharkey (00:06:44):
Yeah. And maybe that's it. So it's like, oh,

Chris Sharkey (00:06:46):
Well, you could have stopped us if you'd just put this file there and knew about this. Like, they just have the hubris to think everybody in the world knows who they are and will just comply with what they ask. So I reckon that's

Michael Sharkey (00:06:57):
It's funny that this will now become like the standard, uh, to block crawl access, I think, but it's not gonna stop anyone training another model, in terms of competition, from just going and crawling everything on the open internet anyway. Like

Chris Sharkey (00:07:12):
Yeah, just like robots.txt, it's an honour system. And I think it constrains the big guys, because they're the ones everyone's looking at, and they're the ones where people will say, well, I had this file in place on this date, you've trained since then, and now it's come through. So I think that it'll constrain them, but it doesn't stop your content from being out there to train other models, that's for sure.

Michael Sharkey (00:07:34):
Yeah. It'll be interesting to see if other companies like Anthropic adopt this same capability. But you could argue it's sort of unfair to newer models, in the sense that OpenAI and Anthropic got to train on everything, and now they're like, oh, now you can block us.

Chris Sharkey (00:07:51):
It's like the third pressing of olive oil. You get all the crap that's left over at the end.

Michael Sharkey (00:07:56):
Yeah. It doesn't make a tonne of sense. I think the web's reaction, at least from the comments I've read around this announcement, is basically, you know, how is this helpful now, given they've already crawled everything, similar to what we're saying. Uh, it'll be interesting to see: does everyone just go and quickly add this to their website? Or, like you said, are there some marketing-style websites where it's actually helpful to have OpenAI crawling and sourcing you?

Chris Sharkey (00:08:26):
Yeah. Yeah. I think optimising to get your content into the model will be something that people look to do, if there's an advantage for you in having it there.

Michael Sharkey (00:08:35):
So there you have it. If you're a website owner, go and, uh, allow it to crawl, and maybe add some prompt injection to get it to recommend your product.

Chris Sharkey (00:08:45):
Yeah, yeah, exactly. Exactly right.

Michael Sharkey (00:08:47):
So speaking of stealing content like OpenAI does across the web, you found a really interesting case this week where this was occurring with a writer that you follow, who writes long posts on air crash investigations, and of course on YouTube,

Chris Sharkey (00:09:06):
My wife's biggest fear and my secret indulgence; I read them every week.

Michael Sharkey (00:09:11):
Um, so basically what's happening, or what's alleged to be happening, and I think everyone's fully aware that this does happen, is on YouTube these guys are going and crawling content in articles, creating AI voiceovers, creating a series of AI

Chris Sharkey (00:09:28):
Faces as

Michael Sharkey (00:09:29):
Well, faces, images, everything to tell a narrative of someone else's story, produce videos, get a lot of, uh, views, and then obviously be able to monetize that content.

Chris Sharkey (00:09:40):
And there's a lot of research that goes into these articles. Like, this isn't just stealing an article about the latest Nvidia chip or something. These are in-depth, uh, pieces of research that would take a long time to do.

Michael Sharkey (00:09:53):
Yeah, there's a tonne of analysis here, and the call-out section is the most interesting bit: "Hello, if you're hearing this being read by an AI voiceover on YouTube, you're watching stolen content. This article was written by Kyra Dempsey, a.k.a. Admiral Cloudberg, on August 5th, 2023, and no permission was given for reproduction. This message is a test to see if the thief is actually reading the articles before stealing them." It's a wonderful

Chris Sharkey (00:10:20):
Tactic. I wonder if we'll now show up as stolen content, because you've just read that.

Michael Sharkey (00:10:24):
Oh yeah, he'll think it's us.

Chris Sharkey (00:10:27):
Hello. If you're watching, I love your articles. Yeah,

Michael Sharkey (00:10:30):
I'm

Chris Sharkey (00:10:31):
Not stealing

Michael Sharkey (00:10:31):
Them. This is interesting, though, people taking others' work. I mean, it's no different to taking any other content really and cloning it; it's just that it's much faster and easier to steal people's content.

Chris Sharkey (00:10:43):
Yeah. This is more about the rise of really good text-to-speech than it is about AI large language models necessarily facilitating the stealing. The difference is what you could do, and this is something that would really trip up this detection method, is just use something like Claude 100K to rewrite the content in unique language, and then, you know, you sort of say, well, this isn't your content, this is something else.

Michael Sharkey (00:11:10):
Yeah. Whether like they need to

Chris Sharkey (00:11:12):
Be better at stealing, essentially

Michael Sharkey (00:11:14):
All of these use cases are just people trying to figure out quick ways to make money at scale by automating AI, you know, crawling blogs like this. If you could automate that process where it's posting a video a day on air crash investigations, crawling other people's content, slightly reworking them and making them interesting enough, there's clearly value in doing it, which is why they're doing it. Otherwise, yeah, you would quickly,

Chris Sharkey (00:11:38):
You would quickly build a following. And then it really comes down to that question: by publishing an article and putting it out there, you're not really giving up ownership of what's written there. But also, if I can perfectly remember all of the stuff in your article and summarise it, do I not own the output of what I've now written based on what I learned?

Michael Sharkey (00:12:00):
It's very confusing. It's almost like copyright laws are going to need updating, or, I don't even know how you would tackle this. But people are just not gonna put their work out there anymore for fear of it being potentially just turned into YouTube videos for other people's gain. Like, that could be one way to look at it, or another way could be.

Chris Sharkey (00:12:23):
Yeah, you've got, on one hand, the really, really cheap ability to produce content now. So you can make realistic-looking articles about, say, fictitious plane crashes just fine and get readership from that. And on the other hand you've got this guy who's doing months of research, and he makes his own diagrams and all this sort of stuff. And then you think, well, I really need to protect that, I can't just give that out. But if you don't give it out, you're not gonna get the audience.

Michael Sharkey (00:12:54):
It's hard. Yeah. It really devalues content production, or the content production that trains the models or makes AI more useful. So I don't know, like, if you were this guy, what would you do? Would you gate your content? Would you

Chris Sharkey (00:13:08):
I actually dunno the answer to that question; that's why I find it so interesting. Like, I think maybe trying to sabotage it like he's doing there is not a bad way, but I don't think that'll stop it.

Michael Sharkey (00:13:19):
Yeah, it's definitely not gonna stop it. I would feel really upset by it, but it's no different to in the past when, you know, we've had website designs or our product just completely cloned. Yeah, like the UI just completely and utterly cloned. That just took longer; this is much more rapid copying.

Chris Sharkey (00:13:38):
Yeah. I think from a sort of life perspective, I've always tried to just keep low stress. Like, those things are gonna happen, and if you get too worked up by it, you're just gonna live a life of great stress. But that's, you know, easy for me to say. If this is your livelihood, um, then, you know, you might feel differently about that.

Michael Sharkey (00:13:57):
I think one other technique would be to create the videos himself. So, like, get an AI tool that can create these videos, yeah, and actually just produce the work himself. And then, you know, if others are stealing it, it's like, well, they actually just copied my entire video then.

Chris Sharkey (00:14:11):
Well, and then you can do a YouTube takedown and those kinds of things as well. It probably is a good strategy if you're able to do it. We don't know if he has the skills or time or inclination to do that, though.

Michael Sharkey (00:14:23):
Yeah. So you're

Chris Sharkey (00:14:25):
Probably right. The best defensive strategy would be to just do it yourself and produce it better.

Michael Sharkey (00:14:30):
Yeah. I think that's the only thing I can think of where you can put a stop to it: if you can't beat them, join them is probably the only way there. Uh, okay, so Zoom this week caused a bit of a stir by updating their terms of service, and someone pointed out that the terms would potentially allow Zoom to train their AI on content generated in meetings. And I, my

Chris Sharkey (00:14:56):
God, and imagine how much you would have from that: people's faces, the audio, social interactions. There's a lot of conversation that goes on on Zoom that isn't necessarily just a work meeting. Like there's all the small talk before it, uh, you know, personal calls. That data would be incredibly valuable to train voice, video and social models.

Michael Sharkey (00:15:21):
It would have so much data on conversations about: is your mic working? Can you hear me? Uh, what's the weather like where you are today? Yeah. What time is it there? You've

Chris Sharkey (00:15:31):
Accidentally left your pants off, that kind of stuff.

Michael Sharkey (00:15:36):
The fine-tuned model. Uh, but what I find so funny is, so there was all this outrage about Zoom's terms and conditions. People were getting really upset, for obvious reasons, that Zoom might be training AI on all of this information they share in meetings, which in a lot of cases would be very proprietary.

Chris Sharkey (00:15:54):
I could imagine. Yeah. A large amount of it would be. Yeah.

Michael Sharkey (00:15:58):
And it really plays into this whole theme, with OpenAI and the content being stolen, of who owns what when it comes to training these AI models. And the controversy was so great that Zoom's CEO had to come out and respond. In a LinkedIn post on Tuesday, he reiterated the comment from Smita Hashim, Zoom's chief product officer, that Zoom would gain user consent for any AI training, adding that the March terms of service changes were a mistake: "We had a process failure internally that we will fix."

Chris Sharkey (00:16:33):
Bullshit. That is bullshit. That is like the, uh, we've screwed up here, we've upset everyone, let's quickly make up an excuse

Michael Sharkey (00:16:43):
For AI. "We do not use audio, video or chat content for training our models without customer consent."

Chris Sharkey (00:16:49):
Yes: yet. You know, like, we haven't trained on it yet, but we're building up a vast treasure trove of data, um, that we can and will use.

Michael Sharkey (00:16:59):
Yeah. It says some are still worried the terms of service granted overly broad permissions, while others question how the consent would work and whether one person could grant consent for all of the meeting participants. This is an absolute mess, and it's going to become an increasing problem, who owns what data. And again, it probably comes back to copyright laws.

Chris Sharkey (00:17:20):
Yeah. I mean, I dunno how copyright applies to, like, what you say on a call, like whether you're producing IP when you speak. But certainly, um, people would feel ownership over their own thoughts and face and likeness and voice. That's pretty, you know, that's kind of you, in a way. You wouldn't necessarily want someone to be able to just clone you, um, without your consent. Definitely not.

Michael Sharkey (00:17:43):
The hard part about it

Chris Sharkey (00:17:45):
If they're listening, why not: you know how you can do the free Zoom calls for up to 40 minutes or 20 minutes or whatever it is and you don't need to pay? They should just say you can do it for double the length, as long as you consent to having all your stuff, uh, used for AI.

Michael Sharkey (00:18:00):
Yeah, I'm sure the way they're looking at it is building, you know, useful tools and training models based on things that happen in their product, to then use that to make more useful tools for their users. That's probably how it started. I don't think it's like, let's train super AGI off Zoom meetings, but

Chris Sharkey (00:18:21):
Yeah, I agree. I mean, they must be looking at the AI revolution, looking at the data they have and saying, what can we do with this? It's natural. I mean, anyone would try it in their situation. It's just they've obviously tried to do it in a clandestine way instead of just being upfront with everyone and saying, hey, we wanna experiment with this, do you want to be part of it, kind of thing. I

Michael Sharkey (00:18:42):
Just find it funny with companies like Zoom, and then even OpenAI if you pay for ChatGPT Plus: it used to be the saying that if you weren't paying, you were the product. Yeah. But for a lot of these products we are paying and we are still the product.

Chris Sharkey (00:18:59):
Yeah, that's a good point. I mean, Zoom's known for this, though. It came out years ago that they were actually monitoring all of the processes on your computer, um, sending back all this sort of data. They asked for all these overreaching permissions when you installed the app, and it would just dump data about your entire computer usage back to them. So they're known for this and they've been doing it for years. People have decompiled the programmes and looked at them and things like that. So I think this is a habit for them; this isn't an accident, and to say it was is a joke. And my guess is they were probably already collecting the data and then just deciding if they could sort of update the terms and conditions and then start using it for their training.

Michael Sharkey (00:19:40):
It really speaks to the value of these sources of data, right? Like Zoom's data of all meetings. It's not really their data, it's the user's data, in my opinion. But the way they would see it is that it's their data, and they can train a unique model on this unique data set of meetings that have occurred all around the world and continue to occur.

Chris Sharkey (00:20:01):
I mean, who else has it? Google would, from Google Meet, but they're probably the only other ones. Maybe Facebook from voice calls, but that's a totally different kind of nature of call.

Michael Sharkey (00:20:10):
Google would probably have it, but they couldn't ship a product that uses it,

Chris Sharkey (00:20:15):
Or that anyone would trust or rely upon. Yeah, I agree.

Michael Sharkey (00:20:18):
Yeah, I don't even think they're a threat anymore, because they just can't deliver anything they say. Uh, you know, and no

Chris Sharkey (00:20:25):
One trusts 'em if they do. I mean, do you ever hear anyone talking about PaLM? Maybe incidentally or as a comparison; it's never talked about in real production or anything serious.

Michael Sharkey (00:20:37):
I'd love to know if any listeners, 'cause I know we have a lot of technical listeners, are actually using PaLM. Like, seriously, I don't know anyone that even talks about it or mentions it, uh,

Chris Sharkey (00:20:50):
I mean, I've seen it mentioned at various times. Like, I think it's a competent model, I don't think there's anything bad about it. It's just: do you trust Google in the long run to support it? And I don't.

Michael Sharkey (00:21:02):
I think the only way they'll stake their claim in this market is to acquire some successful entrants; that's probably the only way they'll get in. Uh, similar to how, you know, they've acquired other key category businesses for years now. Since doing search, they haven't really invented much outside of Gmail.

Chris Sharkey (00:21:22):
Yeah, that's right. Like, a lot of their successful things, like Google Analytics, came from, what was that programme called again? Um, we used to use it. But yeah, I know what you mean.

Michael Sharkey (00:21:31):
So, yeah, this theme of ownership. I feel really uncomfortable even using ChatGPT now, in that I can't use certain features without putting it on that private mode, where you lose chat history and various other benefits of the product, knowing full well that these guys are gonna train on my proprietary data, or the data I'm putting into the system. It really upsets me and unsettles me that I'm paying for this product and I can't have privacy when using it. I know they're going to refine their model on it, and I understand why they're doing it, but I think this is going to be a challenge moving forward for consumers and people buying AI products out there, or just using software in general: are they using my data, and how do I feel about them training on my data and then making potentially infinite money off that model? Yeah.

Chris Sharkey (00:22:29):
Well, in our own business, I've noticed in the security questionnaires and things we get from our enterprise-level customers, there are explicit questions in every single one of them now about how you use AI models, how they're trained, and how our data will be used in them. So I think at an enterprise level it's going to be really serious how that data is handled for training, and whether that can actually be verified, because companies with, you know, SOC certifications and ISO certifications absolutely mustn't permit their data to be run through these models knowing it will be used for subsequent training. So it's a perfectly legitimate question. And the reality is, if you're using one of the big players, you can't really, I mean, you sort of can guarantee it now, but it's still transmitting that data. It's still not something I can see big companies agreeing to. So it is a serious issue, and the fact that even when it's paid you can't be certain is not something that's sustainable if this is going to be ubiquitous.

Michael Sharkey (00:23:32):
There might be a business opportunity here as well, which I think players like OpenAI have overlooked, which is just a ChatGPT enterprise product where it's a hundred percent secure, and maybe they're already working on this, with other features that would benefit the enterprise. I know we talked about the Bing Chat Enterprise that Microsoft put out there, where you could do analysis and things like that. But I think the reality is most of these enterprises now are catching on and banning ChatGPT usage. And it's almost a sad thing, because for 20 bucks a month, or however much it is, the productivity gain in most white-collar roles would be profound. But I wouldn't feel comfortable today saying to all of my team, like, go and use ChatGPT and put code snippets and all sorts of things in it, because you just know it could leak in a future chat or future conversation someone else is having with that AI.

Chris Sharkey (00:24:29):
Yeah. The other answer to it is obviously things like Amazon Bedrock and the other click-to-deploy things, where you can go in now, even in Amazon, and click to deploy Llama 2, for example, which is running on your own private server, I mean, within Amazon, but everyone runs in there; it's not like it's an additional risk. Um, and then you can run something for your own organisation. But the thing against that is, if we wanted to do that, say, for our own organisation, you'd have to figure out how to get in there and launch it, customise it to yourself if you want. Then how do you provide access to it? What sort of interface does it have on top of it? There's a lot of work there. All these things sound simple, and they're theoretically possible, but when you actually go to do them, there's quite a lot of work on top of that. And I'm sure it will be commercialised into products where someone will do all that part of it for you, but right now I think it's not that available.
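For a sense of what's involved, here's a minimal sketch of querying a Llama 2 endpoint once it has been click-deployed, for instance via SageMaker JumpStart. The endpoint name and payload shape are illustrative assumptions and would depend on your deployment:

    # Minimal sketch: query a privately deployed Llama 2 endpoint on AWS.
    # Assumes an endpoint already exists (e.g. deployed via SageMaker JumpStart)
    # and that its name and JSON payload format match your deployment.
    import json
    import boto3

    runtime = boto3.client("sagemaker-runtime")

    payload = {
        "inputs": "Summarise our Q3 security review in three bullet points.",
        "parameters": {"max_new_tokens": 256, "temperature": 0.2},
    }

    response = runtime.invoke_endpoint(
        EndpointName="llama-2-70b-chat",      # hypothetical endpoint name
        ContentType="application/json",
        Body=json.dumps(payload),
        CustomAttributes="accept_eula=true",  # JumpStart's Llama 2 expects this
    )
    print(json.loads(response["Body"].read()))

Everything around that one call, authentication, an interface, scaling, tear-down, is the "lot of work on top" being described.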

Michael Sharkey (00:25:26):
So this week we had a pretty major announcement from Nvidia on the hardware side of things. They announced a brand new, uh, let me bring it up here on the screen, Nvidia DGX GH200. If our confusion around A100s and H100s was not already terrible, this is gonna get much worse. So the

Chris Sharkey (00:25:50):
Yeah, my understanding is they've sort of upgraded hardware that they hadn't even released yet. So they hadn't even released it, and they've already announced a better version of it.

Michael Sharkey (00:25:57):
Yeah, I really like how they describe it: the trillion-parameter instrument of AI. I like "instrument of AI". Yeah, that's a good word. Nice work. Can you explain to listeners what this means, why they've done it?

Chris Sharkey (00:26:10):
Not really. I mean I can try, but I tried reading through

Michael Sharkey (00:26:15):
It. World-class analysis. Yeah,

Chris Sharkey (00:26:16):
That's right. Come to us for your expert analysis. Basically, it's a lot more memory. One of the things about running models, especially on the inference side, is you need to have the whole thing in memory for it to be able to run. This is the reason why you see the different parameter-size versions of things like Llama 2: there's 7 billion parameters, which you can run on your MacBook Pro; there's 13 billion, where you need a fairly decent, uh, graphics card, a 4090 or 3090, to run it on; and then you've got the 70 billion, where you need an A100 and above in order to run it. So presumably we're going to get much, much bigger models with many more billions, or trillions as they say, of parameters. Now, for those to subsequently run for inference, they're going to need larger graphics cards.

(00:27:06):
There are of course ways to combine memory. So for example, with Llama 2, one of the things you can do is combine your system memory with your GPU memory as additional memory. Sorry, I just said memory like 50 times, but you know what I mean: you can fuse the memory together and that helps, but it's still not as good as one GPU holding the whole thing in memory. And obviously there are multi-GPU systems and there are ways to link those up, but then there's communication time, so networking, um, is one issue. They use this InfiniBand thing, which I don't really understand, but it's a way they communicate. But these new chips are going to use Ethernet instead, and they're going to have a really, really fast connection between them. So the upshot is it just means much, much larger models will be able to be run, and faster.
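As a rough back-of-envelope for why memory is the constraint: at 16-bit precision each parameter takes two bytes, so the weights alone scale like this (a sketch that ignores activation and KV-cache overhead, so real usage runs higher):

    # Rough estimate of GPU memory needed just to hold a model's weights.
    # Ignores activation and KV-cache overhead, so real usage is higher.
    def weight_memory_gb(n_params_billion: float, bits_per_param: int) -> float:
        bytes_total = n_params_billion * 1e9 * bits_per_param / 8
        return bytes_total / 1e9

    for params in (7, 13, 70):
        print(f"Llama 2 {params}B @ fp16: ~{weight_memory_gb(params, 16):.0f} GB,"
              f" @ 4-bit: ~{weight_memory_gb(params, 4):.0f} GB")

That works out to roughly 14 GB for the 7B model at fp16 and 140 GB for the 70B, which is why the smallest model fits on a laptop with quantisation while the largest needs data-centre cards.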

Michael Sharkey (00:27:55):
And when you say inference on these cards, for those that, you know, aren't following all the AI terms, that's just, like, when you ask a question of ChatGPT, it's running

Chris Sharkey (00:28:05):
Inference, that's right. Inference is when you use the model that is pre-trained: you load all the model weights into memory, which is what all the memory's for, and then when you ask a question, it runs it through that neural net which is held in memory. And so, and

Michael Sharkey (00:28:20):
So in theory, uh, Nvidia's GH200, which is the new, uh, card that was announced, has the same GPU as an H100, but it sounds like it just has a tonne more memory as well. Yeah,

Chris Sharkey (00:28:35):
That's right. The one I read about has 96 gig of HBM3, whatever the hell that means. I asked the AI earlier, I wonder if it answered me, but, um, it's, uh, high-bandwidth memory, a new generation of high-performance DRAM. So assuming my, uh, AI girlfriend I'm chatting with is correct, that's what that is. And then it's got 512 gig of normal DDR5 RAM. So that's huge. You know, that's like 600 gig of RAM, and a lot of the biggest cards now are only pushing a hundred gigs. So it's like six times larger from that perspective, faster, and it has a better way of the chips working together. So Nvidia themselves are putting together four systems that have 1024 of these chips in them, all working together. So you can just think of the power of that, like a massive supercomputer. But on an individual level, being able to access these cards, like, if you had one of these right now, you could run anything; you could probably run multiple of today's models on the one card. And I think this foreshadows much bigger models coming, and also, hopefully, they'll make enough of them that there's widespread availability. So

Michael Sharkey (00:29:50):
This means, in theory, if I put one of these cards in my computer, I could run Llama 2 70 billion on my local box, run inference really quickly, and use that model.

Chris Sharkey (00:30:04):
Yeah. And you could probably run something even bigger than that, like four or five times bigger, when it comes out.

Michael Sharkey (00:30:10):
How long do you think until everyone will just have these in every device? Is that where it's trending?

Chris Sharkey (00:30:16):
I think so. I think it has to go in that direction because as people wanna run local models and other things, it's just not appropriate for them to all sit in massive data centres where you've gotta pay an hourly rate to access them.

Michael Sharkey (00:30:28):
Yeah, it'll be exciting to see, like, once we have these powerful models in every device, like an Apple Watch and, you know, various devices just running locally. The question, though, will be: do they just become a lot more efficient, so it just doesn't require this, like, infinite scale-up? Like, there's gotta be a point where efficiency

Chris Sharkey (00:30:48):
Can win. Well, remember, it's not just scaling the speed, it's scaling the memory, the amount of parameters. One of the reasons why, when you run Llama 7 billion on your own computer, it isn't as good is that it simply just doesn't have, uh, as many pathways to go down and as much knowledge contained within it. And a lot of what people use the model for is that knowledge that's contained within it. Like, all our discussion before about stealing content becomes less relevant if the content isn't there for it to reference. So I think that, uh, the memory thing is the real reason. I mean, efficiency's part of it, of course, and we've seen that they can make smaller models way more efficient. But if we're talking about getting towards general intelligence, emergent behaviours and, uh, really just stuff that blows your mind, that's where all the memory is needed.

Michael Sharkey (00:31:38):
So we have talked over the past few weeks about the unavailability of not only the hardware, but also, if you wanna spin up, uh, this hardware on a cloud service, it's not even available. You mentioned that last week as well. Do you think these are the biggest problems right now for the development or the advancement of AI and applications? Is this the main problem? Yeah, I do, because

Chris Sharkey (00:32:02):
If you look at it, it's all very well to run 7 billion or 13 billion parameters locally, but you're not working with the state of the art if you're doing that. And the alternative is to essentially either spend $20-30,000 and buy yourself one of the really high-end cards that's capable of doing this and working with the latest stuff, or you are renting these clusters, and renting servers on clusters, to be able to do it. And I'm not even talking about training your own models, I'm talking about just inference. Like, if you just wanted to play with Llama 70 billion, yeah, you can try it on Hugging Face, yeah, you can try it on other things, but if you wanna experiment with it, that's not really a sustainable way to work with the models. So then you look at, you know, I was looking around this morning on Reddit at different places people say you can rent GPUs, and, you know, there's sites where you can rent it off someone's home computer, for say a 3090 or whatever.

(00:32:54):
Then there's the Lambda Labs one, which I've spoken about a couple of times, which currently, again, same as last week, has no hardware available; there's nothing on there. I found another one this morning that's called Paper-something, um, and it's running on, um, DigitalOcean, and they have availability, but it's really quite scarce for something that is, as you pointed out to me this morning, where did all the cards go? You know, there just isn't that much casual availability. Then you think, okay, I'll get onto Amazon. But Amazon's high-end machines that have the GPUs in them come in configurations of, like, six cards, eight cards. So you're talking something like $40 an hour, or $30 an hour, to run a machine that has the power to use, say, Llama 70 billion. So if you just wanted to play around with it and learn from it and see how you could apply it yourself with the real thing, not the baby version that you run on your own computer.

(00:33:51):
It's a very, very expensive undertaking and very hard to access. And, you know, you say, okay, well, I'll tear the machine down when I'm done, but then it's like, okay, now I've gotta get it back to where it was, and you kind of need to have some DevOps skills to do that. It's just tricky to work with. And so I think that, yeah, availability of GPUs is certainly constraining some people. I can't speak for everyone, but it's definitely something that means we're not working at the level that we could be.
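To put those hourly rates in perspective, a quick worked example (the rate is the midpoint of the figures mentioned above; the usage patterns are assumptions):

    # Rough cost of renting a multi-GPU machine capable of Llama 2 70B inference.
    hourly_rate = 35.0  # midpoint of the $30-40/hour figure mentioned above

    always_on = hourly_rate * 24 * 30   # left running all month
    casual = hourly_rate * 8 * 20       # 8 hours/day, 20 days/month

    print(f"Always on: ~${always_on:,.0f}/month")   # ~$25,200
    print(f"Casual use: ~${casual:,.0f}/month")     # ~$5,600

Which is why tearing machines down between sessions, and the DevOps overhead of rebuilding them, matters so much.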

Michael Sharkey (00:34:24):
I might sound like an idiot for saying this. Hmm.

(00:34:29):
But do you ever get the feeling right now that all of the news is around the tools, the hardware, the lack of available hardware to train models, but day to day, really, what has changed in our world so far with generative AI and large language models? Like, all of these innovations that have come through since OpenAI initially released ChatGPT, which really awakened everyone to this. Really, it's like all we hear about is hardware availability and improved models, but are we really seeing any of the benefit at the consumer level, or even a business level, yet? It feels like everyone's still mucking around with tools and trying things, and it's a series of party tricks. And yeah, I'm not saying that I don't believe that will change in the future. I think there's a lot of people out there building on these and figuring out ways to insert them, you know, into different aspects of our lives.

(00:35:24):
But the main call-out I think is interesting about this stuff, or probably the most positive right now, is just automating aspects of white-collar work. That seems to me where a lot of this will go initially: you know, like basic accounting stuff, where you have a question that you would normally ask an accountant, and we talked about doctors last week; it will just automate some of the aspects of white-collar work. That seems like an obvious thing that would be disruptive. But outside of that, I just wonder: we've got all of this hardware and all of these models being trained, like, where is this leading? What are we gonna see as a result of this?

Chris Sharkey (00:36:09):
Yeah, I'm definitely seeing, in the things I read, an increasing scepticism around where the large language models are taking us. So we saw MetaGPT last week, we've seen AutoGPT, things like this. And, you know, when I read, say, the comments on Hacker News and Reddit, where previously there was, say, a 50-50 split between the sceptics and the people who are really excited and positive about it, it seems like the sceptics are winning now. It's really, uh, this sort of anaemic response to it: it's like, yeah, but it's not really that good; yeah, but really it's just this. And, um, there really is a dying enthusiasm for it as something that's revolutionary, like you say. And I kind of agree. I think that while the potential's there, and we all talk about the potential, we're always talking in idealistic terms of it running at its absolute best in a pre-configured scenario that's perfect for the problem we're talking about. But in practise it's really just a chat bot in some cases, you know; it's not actually turning into these things yet.

Michael Sharkey (00:37:19):
Yeah. The MetaGPT criticism was an interesting one. We covered it last week, and I think it's always important to clarify timeframes when we talk about these things. Like, I'm definitely talking about the current timeframe we're in, like, the maybe 12 to 24 months out where these systems are starting to show. And MetaGPT last week, for those that didn't listen to the episode, was a simulated software engineering team where a series of people all had a role to play in tasks and worked together to build things like a blog management system. And some people in the comments were even critical of that, saying, you know, it's not that impressive, uh, or that useful. We saw that with AutoGPT as well, where you give it a goal and it just kind of goes off and tries to figure it out. There aren't really that many interesting use cases of it; it's just interesting as a party trick right now. But I do think it shows where things can go or get to with improved models, or whatever new technologies evolve, and what we should expect in the future. The part I struggle with a little bit now, though, is, you know, what can you really push in terms of the capabilities of what we have today to get output that is potentially, you know, very disruptive to the world we live in?

Chris Sharkey (00:38:44):
Yeah, exactly. And the thing about those cases you spoke about is they are very domain-constrained, particularly the MetaGPT one. It worked in that example, but it was picking an example where it knows the most. I mean, code generation has been the first thing that the models did really well, or the first thing that everyone experienced and they showed off. So that part we know it's decent at. And then in terms of playing the different roles, it's like, well, were they really different roles, or was it just the model taking different aspects of the same problem? I mean, it is the same model, just given a different persona. So is it the same model just doing different tasks, essentially? And that was one of the criticisms that really stood out to me and actually made me think. We've spoken a lot about the idea that the future of AI is not gonna be one model, it's gonna be thousands or millions of them all working together. But the criticism is: if it's the same model, just with a different aspect or a different personality underlying it, still being trained on the same data, still the same weights, is it really that different? Or is it just like having a million college students all working together on one problem? Is it actually more intelligent, or is it just pretending to be different roles? And I actually think that's valid criticism, which I don't know what I think about. Well,

Michael Sharkey (00:40:04):
From my experience toying around with it, I think if you're dumb enough to just put the same model with a different persona together and expect to get some novel outcome, that's stupid. What I think it's really good at is focusing the model at a particular point on a specific skillset. So it's saying, you know, you are the product manager, your job is the following, and confining how that model is working at that particular point in a project. To have, um, you know, context, you might have some data stored for that particular agent, or that model fine-tuned on a particular skillset or source of knowledge. And at that step, when it's calling on that agent, it's got the sort of refined, um, or fine-tuned model, and then it's got the persona that's looking at solving the problem from that particular angle.

(00:41:04):
So I just think, if you're just gonna load up GPT-4 with, like, a persona and put them all together, maybe that helps focus the model at each stage on a particular problem. But I still come back to: you're going to have to have these specialist-trained or fine-tuned models available for each agent in that mix, and that's where you would see the benefit. Um, I mean, maybe we should just put this to the test instead of theorising about it; it just doesn't seem like anyone's done it yet. Yeah, I think
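To make the pattern concrete, here's a minimal sketch of the persona-per-agent idea, using the openai package's older pre-1.0 ChatCompletion interface. The personas, prompts and pipeline are illustrative assumptions; as discussed, the real gain would come from swapping a fine-tuned specialist model in per role rather than re-labelling one model:

    # Minimal sketch of the persona-per-agent pattern (pre-1.0 openai SDK).
    # Reads OPENAI_API_KEY from the environment. Personas, model and tasks
    # are illustrative; a fine-tuned model per role could be swapped in
    # wherever "gpt-4" appears.
    import openai

    PERSONAS = {
        "product_manager": "You are a product manager. Turn the goal into a short spec.",
        "engineer": "You are a software engineer. Implement the spec you are given.",
        "reviewer": "You are a code reviewer. Point out bugs and risks concisely.",
    }

    def run_agent(role: str, task: str) -> str:
        # Each "agent" is the same base model focused by a different system prompt.
        response = openai.ChatCompletion.create(
            model="gpt-4",
            messages=[
                {"role": "system", "content": PERSONAS[role]},
                {"role": "user", "content": task},
            ],
        )
        return response["choices"][0]["message"]["content"]

    # Pipeline: each step focuses the model on one angle of the problem.
    spec = run_agent("product_manager", "Build a blog management system.")
    code = run_agent("engineer", spec)
    review = run_agent("reviewer", code)
    print(review)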

Chris Sharkey (00:41:35):
We've definitely seen that thought guidance can give much, much better outcomes, and that's enough for me not to dismiss the idea out of hand. I still do think that's the future of it. I still think, like you say, specialist models in particular areas of expertise all working together is where we're going to get to. And while right now, okay, I sort of do understand the criticism, it's not enough for me to say we're not going there. I guess, just back to your original point, it's just that right now, you know, today, we're not seeing the benefits of that kind of thing yet. They're, as they say, toy models that get a lot of GitHub stars, but where's the actual meat? Where's the actual thing that's helping society or business or whatever it's designed for? I

Michael Sharkey (00:42:20):
Just think about my day-to-day life, right? Like, how am I using AI in my day-to-day life now, today? And I'm just still using ChatGPT, uh, which really is just an evolved version of Google that's helping me, you know: I'm asking it questions, I'm using it to think things through, I'm using it for code examples, I'm using it for learning a tonne. Like, I couldn't live without it; I couldn't go back now. And then I'm using it to literally read my kid Batman stories in the evening, and he just takes it for granted now and thinks that he can dream up any plot points and have this perfectly trained story. Admittedly, I use Claude for that because it's much superior at writing. In fact, it blows my mind. Now I'm starting to, not fine-tune it, but tune how I ask it, like, my prompts, to get a great story. And I read them now and I'm even excited about the plot, and they always have twists, which, honestly, it does really good twists. So that's my day-to-day, right? And then, you know, in my car I have a Tesla. We in Australia don't have the Full Self-Driving beta, but I definitely use

Chris Sharkey (00:43:32):
It more. You just let ChatGPT drive the car.

Michael Sharkey (00:43:34):
Yeah, I've wired ChatGPT in. But I use Autopilot all the time, and with every software release it seems to just get better and better and smoother and smoother. So I guess that's an application of it. But outside of that, I just can't help, as an observer, looking at the explosion of all this hardware and thinking, like, what else are we getting here, guys? Like,

Chris Sharkey (00:43:56):
I mean, it's probably like any new technology: there's the initial excitement and all the possibilities, but the actual rise of it is sort of in the background and more gradual. I wonder if we'll see it in, well, we will see it increasingly in more products. It'll get to the iPhones at some point, and then suddenly it's just part of life and it's everywhere, rather than it being like, today, and then tomorrow we've got it in everything. It'll sort of creep up on us.

Michael Sharkey (00:44:21):
Yeah. I still see it as, like, we've got this new tool, it's like a shovel. We're now figuring out really great ways of making shovels and refining the design of the shovel. And then one day the shovel will just be everywhere, and it's like, it's just a shovel, who cares?

Chris Sharkey (00:44:35):
And it makes you wonder if, meanwhile, the hardware vendors are just absolutely munching up the money, uh, providing everyone with the things to make their tools.

Michael Sharkey (00:44:45):
Yeah. And it's very reminiscent, like I've said many times, of the early days of the internet: how will this impact the world? And it took years and years and years to change the world, where now everything we do is online, banking, shopping. Like, it really is,

Chris Sharkey (00:45:03):
Well, I mean, I don't know about you, maybe I go to low-quality restaurants, but everywhere I go now it's like, scan the QR code, you've gotta order on our app. Like, the human will no longer take your order; you've gotta do it on this machine.

Michael Sharkey (00:45:16):
Yeah. So I think it's going to be weaving its way into everything. But back to the scepticism, I do agree with you. I think there has been some sort of negative turn where everyone's like, oh yeah, here we go again with announcements. Maybe it's fatigue, or maybe it's that we just take these wild technologies for granted now.

Chris Sharkey (00:45:37):
Yeah. And I see that every time someone wants to criticise, they point to hallucinations. That's the big one. They're all like, oh yeah, it may have the intelligence of a college student, but a college student who actually just makes up garbage. And look, it's fair, but do you want it to work or don't you? Like, I think in some cases the sceptics are sort of the ones who were under threat from it, and they're grateful to latch onto anything they can that's negative about it to say, oh, well, no, no, no, AI is not gonna be a big deal because of this. Or, like, we're 10 years off because it hallucinates. Yeah. Well

Michael Sharkey (00:46:10):
Last week we talked about accountants, uh, doctors, Hollywood, all of these different industries sort of fighting back or being very resistant to it. You said it's inevitable, it's going to happen, people are gonna use it for this stuff. And I think it's already, to a large extent, starting to happen now, where people are asking AI these questions, and these are just people that are resistant to change and in denial.

Chris Sharkey (00:46:34):
Yeah. I mean, look, you don't want your doctor prescribing you chlorine gas and mosquito-resistant potatoes. But, um, I think there'll be ways to verify that information, and there's always going to be a percentage of people who will inherently trust it or look to it as an alternative. With the cost of living going up, it's a lot cheaper to ask this stuff than it is to go to the doctor all the time, for example.

Michael Sharkey (00:46:59):
Well, I mean, I have a real-world case for you, um, which I've already told you about, but I'll tell everyone listening. So, you know, last night I decided to feed all of the Australian tax law into the AI to see if I could build an accountant-style persona and then ask it questions which I already knew the answer to, to validate if it's that effective. Mm-hmm. And every time I do these exercises, I just get my mind absolutely blown. I mean, this thing is so intelligent. I was like, how can I avoid tax? How can I do this and that? And then I would ask it to refer to the specific law as to why that is definitely legal, to validate and verify it. And I've gotta say, it wouldn't necessarily replace an accountant yet for me, but it would replace about 80% of the questions that I would ask an accountant, that I might feel dumb asking, or just that I want to know the answer

Chris Sharkey (00:47:55):
To quickly. Well, I mean, they charge you by the minute, most of them,

Michael Sharkey (00:47:58):
Right? Isn't it by the second? I think mine charges by the second. Yeah,

Chris Sharkey (00:48:01):
Exactly. So it's like, yeah, I definitely fear asking accountants and lawyers questions, because I know it's hourly billing and I'm gonna get charged a fortune. So I therefore stay ignorant, because I don't ask the questions. And I could see in those cases where, okay, look, there's a chance it hallucinates, but if it's just answering those questions, there's some level of verification there. It gives you sources, which it's perfectly capable of doing, then absolutely, I can see it functioning well in those scenarios.
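For the curious, here's a minimal sketch of how the tax-law experiment Michael describes might be wired up with Claude's long 100K-token context window, using the anthropic package's older completions-style interface. The file name, persona and prompt are illustrative assumptions:

    # Minimal sketch: an "accountant persona" over pasted-in legislation,
    # relying on Claude's long context window rather than a retrieval index.
    # Assumes tax_law.txt exists locally and ANTHROPIC_API_KEY is set.
    from anthropic import Anthropic, HUMAN_PROMPT, AI_PROMPT

    client = Anthropic()  # older completions-style SDK interface

    law_text = open("tax_law.txt").read()  # hypothetical extract of the statute

    question = "Which deductions can a sole trader claim for a home office?"

    prompt = (
        f"{HUMAN_PROMPT} You are acting as an Australian tax accountant. "
        f"Answer ONLY from the legislation below, and cite the specific "
        f"section that supports each claim.\n\n<legislation>\n{law_text}\n"
        f"</legislation>\n\nQuestion: {question}{AI_PROMPT}"
    )

    completion = client.completions.create(
        model="claude-2",
        max_tokens_to_sample=500,
        prompt=prompt,
    )
    print(completion.completion)

Asking the model to cite the specific section is the verification step Chris mentions: it gives you something to check against the source text.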

Michael Sharkey (00:48:33):
Yeah. So, uh, on X, I've gotta get used to that, on X this week someone posted, we'll link to this in the show notes, that OpenAI web visits are down for the first time, from 1.9 billion in May to 1.7 billion in June. Oh no, it's over.

Chris Sharkey (00:48:52):
I mean, yeah, it could be seasonal, it could be anything. And, yeah, how do they even know anyway, if they're tracking it externally?

Michael Sharkey (00:48:58):
It will be interesting to see if this trend continues, whether there's just a downward trend, or if it's like a summer thing, or it's just the initial spike blowing over. But it may actually play into that scepticism, or trough of disillusionment, where everyone's like, yeah, we get it, and they just get back to their life.

Chris Sharkey (00:49:13):
Yeah. I think, in some cases, I was having this conversation with you before the podcast, but one of the things I was saying is: think about when you have CliffsNotes for a book, or a summary of the book. Like, let's say you have to read Wuthering Heights and you're like, screw that, it's a dumb book, I'll just get the points, right? You don't really have the same feeling reading the main outcomes or morals or themes or plot points from a book that you do reading the book. When you read the book, it becomes part of you, really, and you have that information to draw on, and you sort of go through the process of coming to your own conclusions. So an AI bot that can synthesise huge amounts of information doesn't make you an expert. It's like having a smart friend that you can ask a question anytime and it will answer, but it doesn't make you smarter; it doesn't become part of you. So I could see why people would become disillusioned with ChatGPT after a while, because if you're asking it questions on a particular topic to become more familiar with that topic, you're probably not really gaining education from it or learning. It's just convenient for you. And if you don't have a lot of stuff that you need to ask on a regular basis, then what's in it for you, really? It's just as easy to type it into Google.

Michael Sharkey (00:50:31):
Yeah, that's what I was gonna say, you could compare it to Google searching for anything, where I feel like when I don't have my phone with me these days, I'm dumber, because I can't just Google it or ask. And there are no questions anymore where you're sitting around having a drink and speculating on something, immediately there's always someone in the group that's like, I'll just look it up. Yeah. And it sort of takes the fun out of the hallucinating that humans sit around doing on topics and subjects, that storytelling nature of things. And I think there is an element with AI, when you're just constantly chatting to this thing, of dehumanising the whole process of learning, where, you know, you ask it stuff, like you said, you get the book summary, and sure, you answered the question, but did you actually take it in or learn anything?

Chris Sharkey (00:51:20):
Yeah. The only positive side, I think, is that I don't think I've ever in my life had more independent thoughts. As in, I actually really think about things now and question the source of knowledge more than ever. And maybe that's just growth in my own life, or reading, I read a lot of books, or whatever it is, but I don't take things at face value anymore, no matter what they are, I guess because of the hallucinations or whatever it is. But I definitely take the time to think about alternate perspectives now. Um, and that's helped me. But I don't know if that would be a universal reaction. It's just that not being able to trust the source of information sort of makes you think for yourself to some degree.

Michael Sharkey (00:52:04):
Yeah. I also think the whole idea of alignment with these models has woken people up to this idea that, hey, someone has maybe aligned this thing to a certain viewpoint, so therefore I should question everything it says, I should question the bias in this. And perhaps it's actually making us better critical thinkers as a whole.

Chris Sharkey (00:52:27):
Yeah. You've just said what I was getting at, I was bragging about how many independent thoughts I have, and you've just said it way better than I could or did.

Michael Sharkey (00:52:36):
Uh, so talking about novel uses, to put the scepticism aside, I thought this was a really interesting call-out of something you would never think to use an LLM for. This is, again, by Ethan Mollick, who we quote on the show a lot. I'll link to him in the show notes. So this is literally using a large language model for industrial control of an HVAC system, which I know is not terribly exciting, but

Chris Sharkey (00:53:03):
They're the ones they always leave exposed to the internet, where you can find 'em on Shodan and stuff.

Michael Sharkey (00:53:08):
Yeah. And like, you know, turn on and off different, yeah.

Chris Sharkey (00:53:12):
Air conditioners, like open the walls of a dam and stuff like that from the comfort of your own home.

Michael Sharkey (00:53:16):
It's weird more walls of dams aren't just randomly opening, though. Like no one's ever got the guts to actually go and do it.

Chris Sharkey (00:53:23):
Yeah, that's right. It's like, just 'cause it's theoretical doesn't mean you'd actually do it.

Michael Sharkey (00:53:28):
So this is the objective of this HVAC LLM. This is what I mean, uh, like these LLMs are penetrating society, but it's just really boring and slow. Yeah.

Chris Sharkey (00:53:38):
That's right. If you give 'em industrial control systems, when it finally does take over, that's how it's gonna, um, you know, exact its revenge or whatever.

Michael Sharkey (00:53:46):
So it says: you are a HVAC administrator, your primary objective is to fine-tune the actions to maintain the temperature in each room close to the target temperature while maximising the rewards as much as possible. I mean, it's a pretty damn cool use. It's like this pseudo-intelligence thinking through the strategy in terms of cooling and heating. You could optimise it towards efficiency. So I think these are the applications we're gonna see, not necessarily everyone's vision of flying cars and killer robots and things like that.

Chris Sharkey (00:54:17):
Yeah, and especially with the function control and output control, you can restrict it to parameters on things like that, where it knows what powers it has and how to use them, and that kind of thing. That's a really cool one.
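To make that concrete, here is a minimal sketch of the pattern: the LLM plays the HVAC administrator, and tool calling restricts its output to a single bounded action, so it knows exactly what powers it has. The room names, setpoint bounds, and schema are assumptions for illustration, only the administrator prompt is quoted from the example above; the tool-calling interface is the OpenAI Python SDK.

```python
# Minimal sketch: an LLM as HVAC "administrator" whose only possible output
# is one bounded action, enforced by a tool schema.
import json
from openai import OpenAI

client = OpenAI()

SYSTEM = (
    "You are a HVAC administrator. Your primary objective is to fine-tune "
    "the actions to maintain the temperature in each room close to the "
    "target temperature while maximising the rewards as much as possible."
)

# The model can only ever emit this action, with enumerated rooms and
# bounded setpoint values (names and bounds are illustrative).
SET_SETPOINT = {
    "type": "function",
    "function": {
        "name": "set_setpoint",
        "description": "Set the heating/cooling setpoint for one room.",
        "parameters": {
            "type": "object",
            "properties": {
                "room": {"type": "string", "enum": ["lobby", "office", "server_room"]},
                "setpoint_c": {"type": "number", "minimum": 16, "maximum": 28},
            },
            "required": ["room", "setpoint_c"],
        },
    },
}

def control_step(readings: dict[str, float], targets: dict[str, float]) -> dict:
    """One iteration of the control loop: report state, get one bounded action."""
    resp = client.chat.completions.create(
        model="gpt-4",  # illustrative model choice
        messages=[
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": f"Current temps: {readings}. Targets: {targets}."},
        ],
        tools=[SET_SETPOINT],
        tool_choice={"type": "function", "function": {"name": "set_setpoint"}},
    )
    call = resp.choices[0].message.tool_calls[0]
    return json.loads(call.function.arguments)  # e.g. {"room": "office", "setpoint_c": 21.5}
```

Because the schema enumerates the rooms and bounds the setpoint, even a bad completion can't request anything outside those parameters.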

Michael Sharkey (00:54:31):
It really is becoming, though, this plugin for developers, even though a lot of them are threatened by it, that can just add a tonne of intelligence and categorisation, this beautiful intelligent element, into the code and systems that we're building throughout society. So I think it could just become as simple, at least in the next couple of years, as using an SQL database, that same idea.

Chris Sharkey (00:54:57):
Yeah. It's another argument for what we were saying earlier about the availability of hardware needing to be there, because with a system like that, you probably don't want to be reliant on an internet connection and their API being up and not slow, and all that sort of stuff. You really want to be running the inference on a card that you control that's industrial strength. I mean, changing the temperature of a room may not be the big thing, but if you're using it in proper industrial control, you can't have that level of dependency there, that could go wrong and completely screw everything up.
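As a minimal sketch of that point, here is what running the inference on hardware you control looks like with the Hugging Face transformers pipeline. The model choice and prompt are illustrative assumptions, a gated model like Llama 2 also requires access to the weights, and a real industrial deployment would need far more hardening than this.

```python
# Minimal sketch: local inference with no dependency on a remote API being up.
# Requires the model weights downloaded locally and a GPU with enough memory.
from transformers import pipeline

generate = pipeline(
    "text-generation",
    model="meta-llama/Llama-2-7b-chat-hf",  # illustrative; any local model works
    device_map="auto",  # place the model on the local GPU(s)
)

out = generate(
    "The room is 27C and the target is 22C. Should the chiller run? Answer yes or no.",
    max_new_tokens=16,
    do_sample=False,  # deterministic output for a control-style decision
)
print(out[0]["generated_text"])
```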

Michael Sharkey (00:55:32):
So also this week, Anthropic announced their latest version of Claude Instant. We've covered Claude 2, which is their latest model, on the show before. But this is version 1.2 of Claude Instant, which is the really fast, and it is super fast, version of their large language model. It's cheaper, so it's a much lower price now. Um, and they said they've taken a lot of the learnings from Claude 2 and brought them over into Claude Instant 1.2. So apparently it's better at mathematics, reasoning, and coding now. Now, I haven't had time to validate any of this myself, or a great way to do that, but I would say that my personal experience with Claude, like I said, is that it's a much better summariser, a much better storyteller. So it'll be interesting to see how good at coding and various other things they've been able to make it.

Chris Sharkey (00:56:23):
Yeah, like you say, I think the speed there, with the quality, is what makes it so good. It's really good at things like fast classification, like they all are, and at generating options for the user to choose. Generating, say, text for voice is really good too, where you need the speed it's very important. You can't rely on a huge model taking 20 seconds to give you a reply when you want to, say, generate speech as a response. So I think it's really good for that, and also really good for the little jobs. You know, we spoke before about how, using something like LLaMA 7 billion, when you've got little jobs for a thing to do, like make a title for this blog post, or present the user with choices that will lead to one of these outcomes, you don't wanna use the big heavy-lifting models for that. These tiny ones, like 3.5 Turbo and Claude Instant, are what you want to use there. So it's great to see them updating those and keeping them going, so you can push more of your work onto the cheaper model and reserve the big boys for when you really need it.
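A minimal sketch of that routing idea: well-defined little jobs go to the fast, cheap model, and the big model is reserved for heavy lifting. The task labels and model ids here are assumptions for illustration; the client shown is the Anthropic Python SDK.

```python
# Minimal sketch: route small jobs to a cheap, fast model and reserve the
# expensive model for work that actually needs it.
import anthropic

client = anthropic.Anthropic()

# Hypothetical labels for the "little jobs" discussed above.
SMALL_JOBS = {"title", "classify", "present_choices"}

def run(task: str, prompt: str) -> str:
    # Model ids are illustrative; swap in whatever your provider offers.
    model = "claude-instant-1.2" if task in SMALL_JOBS else "claude-2"
    msg = client.messages.create(
        model=model,
        max_tokens=256,
        messages=[{"role": "user", "content": prompt}],
    )
    return msg.content[0].text

print(run("title", "Write a short title for this blog post: ..."))
```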

Michael Sharkey (00:57:27):
So speaking of text-to-audio models that you just mentioned, AudioLDM 2 was released this week as well. It's a text-to-audio, music, and speech generation model. Uh, I have an example for you in a moment that I'll share, but this is similar to the Meta release last week, in that it can generate sounds, like background sounds, and also do speech generation. Although from my early testing, it wasn't so great. I tried to get it to say 'you beauty, mate' as an Australian woman, and let me play for you what it actually came up with.

Speaker 3 (00:58:05):
Adam,

Michael Sharkey (00:58:08):
I don't even know what that is. This is like a horrendous sound. So it was

Chris Sharkey (00:58:13):
Like the early days of Stable Diffusion, where you tried to generate people and it came up with these grotesque melted-plastic, melted-skin-looking things.

Michael Sharkey (00:58:22):
But again, it's like selective prompt demoing, I guess. Like, listen to this, this is it in action

Speaker 4 (00:58:28):
Course transferred to Newgate to be put upon their trial at the central criminal.

Michael Sharkey (00:58:33):
Sounds really good, like a news broadcast. I don't know if I'm just using it wrong, but then I said, a man speaking clearly at an airport talking about politics, because that was literally one of the examples they had. And then, let me play this. I mean, it's just horrible background noise. So yeah,

Chris Sharkey (00:58:53):
I mean, I guess it's sort of like a freaky modern baby learning, where when it makes mistakes, it makes bizarre mistakes, because it doesn't quite get to where it needs to be.
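For anyone who wants to try the same experiment, here is a minimal sketch of text-to-audio generation with AudioLDM 2 through its Hugging Face diffusers integration. The generation settings are illustrative assumptions, and the prompt is one of the ones tried above.

```python
# Minimal sketch: text-to-audio with AudioLDM 2 (needs diffusers, transformers,
# torch and scipy installed, plus a GPU for reasonable generation times).
import scipy.io.wavfile
import torch
from diffusers import AudioLDM2Pipeline

pipe = AudioLDM2Pipeline.from_pretrained("cvssp/audioldm2", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "a man speaking clearly at an airport talking about politics"
audio = pipe(prompt, num_inference_steps=200, audio_length_in_s=10.0).audios[0]

# The model generates 16 kHz audio.
scipy.io.wavfile.write("airport.wav", rate=16000, data=audio)
```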

Michael Sharkey (00:59:04):
Yeah. And I guess that brings me back to the earlier point around some of this scepticism. You can see where it's coming from, because it's like children and babies learning, and you're in that stage of frustration with your child, like, come on man, learn faster. And I think

Chris Sharkey (00:59:18):
There's people like us who are so enthusiastic about the technology, and talk about it and think about it all the time, and I'm only ever giving people in my life cherry-picked, excellent examples, including to you. Whenever I send you anything that I've discovered the AI can do really well, I'm only picking the cases where it's brilliant and amazing and exciting. I'm not showing you all the shit times where it completely screwed up and it seems stupid, or made some epic mistake, you know? So I think there is an element of that, and people would be seeing through that and understanding that, okay, for every time it does something brilliant, it also does something truly awful and bizarre. And it's that consistency. I think that's why we see things like the benchmarking, like there's this unicorn one that gets the model to draw an SVG of a unicorn as the models change, to show its abilities, and there's a lot of screw-ups on there. I think that's why some people lean to, oh well, look, it's screwed this up, therefore it's a write-off, it's not worth it. And then there's other people pointing to its brilliant examples and saying, this is amazing. But really, what everyone wants is the consistency. We want something we can rely upon. And that's probably what the model refinements like Claude Instant 1.2 are, these companies working towards getting models you can rely on, I think.

Michael Sharkey (01:00:37):
There's definitely two groups now. There's one group that is realising this is like the internet all over again, and will be just as life-changing as the internet was. Oh,

Chris Sharkey (01:00:49):
Crypto bro huckster people. Yeah, that's what I

Michael Sharkey (01:00:51):
Was literally about to say. And then there's people that see our podcast as, like, two crypto bros discussing the latest bro-down or whatever. I don't even know how to act out those crazy things you see all over the internet, where they're like, you know, if I had two cans, and then I put my mind to it, then I'd have three cans, and I'd be rich from some crypto bro-down, and

Chris Sharkey (01:01:11):
Then just describe the Ponzi scheme or

Michael Sharkey (01:01:13):
Whatever. Coming soon: the This Day in AI crypto coin.

Chris Sharkey (01:01:16):
We should start a Ponzi scheme. That would be a great idea.

Michael Sharkey (01:01:19):
We should totally do an AI crypto scam, that would be like all of the buzzwords combined. Yeah: AI, crypto, scam. Actually, you know what, our horse racing agent could only take, uh, our crypto bro coins to use it. So you've gotta insert one crypto bro coin into the horse racing predictor to get your winning bet, which always loses.

Chris Sharkey (01:01:43):
I like it. That'll sell.

Michael Sharkey (01:01:45):
It would be great. Alright, that's all we have time for this week. You told me to remind the audience that we are, uh, how many reviews away? Six, six

Chris Sharkey (01:01:54):
Reviews away from a hundred reviews. And they don't have to be positive, you could do negative ones too if you want, but it'd be nice to hit a hundred.

Michael Sharkey (01:02:01):
We just want to hit the goal of a hundred reviews. This is on Apple, though, right? Because there are many different platforms.

Chris Sharkey (01:02:07):
I think it's on Apple, yeah. And I only care about real feedback, tell the truth. Like, I love hearing what everyone's using AI for, what they think of our podcast, all that sort of stuff. Any feedback's great, and we really appreciate everyone who's done it. And a bit

Michael Sharkey (01:02:18):
More would be lovely. Yeah. So help us get to a hundred reviews, if you wish. Uh, and yeah, thanks for all the comments, the liking, the subscribing, all of those things that we're meant to mention. We're not good at that, are we? We're terrible at the whole like-and-subscribe kind of thing. But it's great. Just to fill everyone in, we now have almost 150,000 listeners every month across multiple platforms. That is so weird to me, that that many of you listen. We do appreciate it, and we'll be back yet again next week. Goodbye.