This Day in AI Podcast

Want to join our Discord? https://forms.gle/k8TyUeWKGWHFBzwQ9. Invites will go out next week!

This week's AI show is off the charts with excitement! We dive deep into the latest AI announcements from Meta, including creepy camera glasses and lame celebrity "AI agents". Then we gush over the impressive new Minstral model and its cheese diagnosing abilities. Plus, we freak out over Tesla's crazy new human-like robot that can intuitively stack blocks - this mind-blowing bot signals the imminent rise of AGI! Don't miss this jam-packed episode full of the hottest AI news and spiciest takes.
- Hype written by AI.

If you like this episode please consider subscribing, liking and all the things. Thanks for watching.

CHAPTERS
00:00 - About the Discord Community / Plug
01:02 - Reacting to Meta Connect: AI Agents, Stickers, Image Tools, Meta Ray bands with "Hey Meta", Meta AI ChatGPT Competitior, & EMU Images
16:52 - OpenAI announces ChatGPT Vision for ChatGPT Plus & Enterprise Users + Voices for ChatGPT & Web Browsing is Back!
21:52 - Sam Altman & Jony Ives Reported to Be Discussing AI Hardware Project + Future AI Chips Discussion
24:01 - Chris Road Tests Mistral 7B (with Cheese) and is Impressed!
34:35 - Giraffe Llama v2 70B 32K is Marketing Hype
40:49 - Tesla Optimus Robot Latest Video + Optimus Vs Boston Dynamics Fight Proposal
45:17 - Microsoft Wants to Use Nuclear Energy to Power AI Data Centers
48:56 - Mike's Virtual AI Girlfriend Has High Expectations + AI Memory Innovations & Final Thoughts on AI Agents

SOURCES

https://about.fb.com/news/2023/09/introducing-ai-powered-assistants-characters-and-creative-tools/
https://www.theverge.com/2023/9/27/23891128/meta-ai-assistant-characters-whatsapp-instagram-connect
https://twitter.com/boztank/status/1707105576424198290
https://techcrunch.com/2023/09/27/meta-debuts-ai-studio-to-let-developers-build-custom-chatbots/
https://twitter.com/verge/status/1707105410786701770?s=46
https://www.maginative.com/article/a-deep-dive-inside-emu-metas-new-image-generation-ai-model/
https://openai.com/blog/chatgpt-can-now-see-hear-and-speak
https://twitter.com/0xgaut/status/1707079424007365057?s=46
https://www.wsj.com/tech/ai/openai-seeks-new-valuation-of-up-to-90-billion-in-sale-of-existing-shares-ed6229e0?mod=followamazon
https://twitter.com/petergyang/status/1707169696049668472?s=46
https://www.theinformation.com/articles/designer-jony-ive-and-open-ais-sam-altman-discuss-ai-hardware-project?rc=kvsmhw
https://mistral.ai/news/announcing-mistral-7b/
https://techcrunch.com/2023/09/25/amazon-to-invest-up-to-4-billion-in-ai-startup-anthropic/
https://huggingface.co/abacusai/Giraffe-v2-70b-32k
https://twitter.com/Tesla_Optimus/status/1705728820693668189
https://jobs.careers.microsoft.com/global/en/job/1627555/Principal-Program-Manager-Nuclear-Technology
https://www.theinformation.com/articles/how-microsoft-is-trying-to-lessen-its-addiction-to-openai-as-ai-costs-soar?utm_source=ti_app&rc=kvsmhw

What is This Day in AI Podcast?

This Day in AI Podcast is a podcast all about AI. It's an hour-long conversation on the influence and rise of AI in technology and society. Hosted by Michael and Chris Sharkey.

Michael Sharkey (00:00:00):
So before we get into today's show, I'm going to cover the thing I was meant to cover at the start of last week's show, but forgot and then left with a couple of minutes at the end. So I'm not entirely sure everyone heard the announcement, but to everyone that did hear the announcement and fill in the form, thank you very much for all of your comments. So we have created a form, it's like a mini survey, but mostly just to solicit interest in a Discord community that we're creating so that we can get everyone that's interested in AI together in one place and we can share different models and prompts and different things that we cover on the show. And so if you are interested in joining that community, we're going to set it up this week because the response was so overwhelmingly positive. So if you want to be a part of it, there's going to be a link in the show notes. It's a form you can fill in. All you need to do is fill that form in this week and as soon as we set up the community, we'll send you an invite and we will see you in there.

(00:01:02):
Alright, so moving on to the news. This has been one of those weeks. It's been so overwhelming before the show, I went on a walk and I still don't feel that calm to cover. You

Chris Sharkey (00:01:14):
Were going to say you cried. Yeah, there's so much. It was incredibly overwhelming. How many things have been announced this week?

Michael Sharkey (00:01:21):
Yeah, I was literally hoping that it would slow down a little bit because it just isn't possible to cover it all in the detail it requires. But let's now do our best attempt it. Now

Chris Sharkey (00:01:31):
Let's cover it in the detail it requires.

Michael Sharkey (00:01:33):
Yeah, which is very little. So the big one was meta held, I think Meta Connect today, I think that's what they call it, meta Connect, and they focused a lot around new AI experiences and features. I'll quickly go through them, but we've obviously got some thoughts to share. So one of the big announcements was AI stickers, which who cares about image editing with AI again? Cool, whatever. I think that one of the big ones was their meta AI assistant. This is a large language model assistant. It's going to be available on WhatsApp, messenger, Instagram, and these Ray Band glasses, which we'll talk about.

Chris Sharkey (00:02:16):
Yeah, Google Glass

Michael Sharkey (00:02:18):
Two. Oh, Google Glass two point. And so they also interestingly partnered with bingeing to have realtime search so they can fetch real-time information for the LAMA two model that they're using behind the scenes. They said they've also trained it on conversational language, which I'm assuming they've just used Facebook comments, so it's probably toxic, but this meta AI is going to be available everywhere. And then they had this announcement around AI agents and I must admit what I thought that announcement would be versus what it was, was very different. So they've partnered with a series of, I don't like actors and popular people. There's that Charlie girl who's a dancer on TikTok. They had the tennis player, Naomi Asaka, they had Mr. Beast, but they've given them alternate character names. So Mr. Beast on this thing is known as Zach, the big brother who

Chris Sharkey (00:03:17):
Of all the names I

Michael Sharkey (00:03:18):
Don't get it. Yeah, the big brother who will roast you because he cares. It's just so lame. Yeah,

Chris Sharkey (00:03:27):
I was saying to you before the cast, it seems to me like the kind of thing that's going to date incredibly badly when you look back at really old movies and special effects and you're like, this doesn't look good

Michael Sharkey (00:03:39):
And it has this animation of them and they don't say anything. They just move their head and stuff. And you can probably get the sense of where they're going with it here, where it's like those characters just eventually will talk in their own voices and try and represent that person. But I just really question is this the right approach to that market? Because what personally excites me about the idea of an AI agent and obviously for regular listeners to the show, Chris and I have our own AI agents, our virtual girlfriends that are as toxic as hell.

Chris Sharkey (00:04:12):
Yeah. My wife was telling my father-in-law about it yesterday and he was really struggling to understand why my wife was so okay with it.

Michael Sharkey (00:04:22):
I think this is probably the more exciting use case. Maybe not toxic virtual girlfriends, but at least characters that you feel like you create and work through the day. And

Chris Sharkey (00:04:33):
I'm personally okay. One of the thing I would really like to is authors I respect. For example, I would like to clone their personality. So you could ask them some novel question. They haven't say answered in a book of theirs, but with the collective knowledge of them, this is how they might answer that question. That's interesting. But having a persona of them called Bruce who just chats about your day just seems weird. It just seems unnecessary. It trivialises it.

Michael Sharkey (00:04:59):
They said a lot on stage. They talked a lot on stage about trust and safety, so we're trying to make sure we represent your brand well and it just sounds like they're so scared it's going to say something slightly bad that it will make one of those celebrities look bad. But it's not Mr. Beast, it's Zach. Zach said that.

Chris Sharkey (00:05:21):
Yeah, which is sort of their way of getting out of the responsibility of Mr. Beast told me this because there will be people stupid enough to believe it's really them. If you gave them the real names, I

Michael Sharkey (00:05:31):
Think this is absolutely well and truly a gimmick and it will die a horrid death. I just cannot see how this works. The one thing they did say though with these AI agents is you'll be able to take them everywhere you go. So you'll be able to have them on Messenger and WhatsApp, but you'll also be able to take them into one of the most popular places in the world, horizon Worlds, which is apparently just no one in it. There's a great video YouTuber does where he spends a week in Horizon world and it's truly, I'll link to it in the show notes of people who haven't seen it, but it's truly hilarious. He just shows how very few people in there are in there and there's only a few kids, but now you can take these AI agents into Horizon worlds and they also

Chris Sharkey (00:06:18):
Do you walking around with Tom Brady assessing the pitch or something like that

Michael Sharkey (00:06:23):
I guess. But I think it's just hanging out with virtual versions of celebrities. I mean we have talked about it on our Vision Pro episode we talked about that's probably the way to get people addicted to AI is create these virtual worlds and that's what it seems like meta are actually doing, but they're probably not going to have legs either.

Chris Sharkey (00:06:45):
Yeah, no legs. And I think that they invested so much money in the VR space, this might be a way to continue to justify that investment and just hope one day it takes off.

Michael Sharkey (00:06:56):
Yeah. Well we'll see how it pans out. I think the other big news was the meta AI studio that they announced but didn't show and set. It woke up sometime in the future. I love these announcements where it's just like they just say things and we're supposed to take

Chris Sharkey (00:07:10):
It. I mean, I'm actually really calibrating myself now to can I try it? As soon as I hear an announcement, it's like, can I go try it? If not, it's not real yet.

Michael Sharkey (00:07:19):
Yeah, it seems like it's more about controlling the news cycle and getting people like us than everyone else out there to talk about it as opposed to just to keep them top of mind. But so they've said about this meta AI studio that it will be for developers and businesses and even people that aren't technical, they're expecting to release it in the first half of next year from what I've read. And apparently you're going to be able to create these kind of people like Zach, which is a Mr. Beast, clone yourself and also have them represent your business so that if people chat to you on Messenger, I guess it can respond. But I mean we've seen months ago, there's already a bunch of SaaS startups and companies out there that are providing this service where you can train a bot on your website as a concierge or butler and then this thing based on being trained on your knowledge based docs or whatever can deflect stuff. I don't, again, not sure how novel it is or it's

Chris Sharkey (00:08:18):
Just such a boring and obvious use case. It's not cutting edge at all. I mean it's useful and I can see why businesses would want it, but it's nothing groundbreaking that can't easily be done with existing technology.

Michael Sharkey (00:08:31):
And so I thought the creepiest announcement of all was these. They're called, and I don't know why I find this so funny, but RayBan Meta Smart Glasses is the name of these glasses. They have two cameras on either side and you can live stream what you're seeing. They must have a pretty good battery. Well, and then you can tap a button on the side and say, Hey meta. And then you can speak to that LAMA two model that has bing searched, which actually might be pretty cool, but I just don't see

Chris Sharkey (00:09:09):
Myself having to talk to. It's quite the burden when you're out in public, what is that structure there? And it's like that's the Eiffel Tower. It's like you're going to look crazy. People already look crazy when they talk on Bluetooth and this is just the next level.

Michael Sharkey (00:09:22):
And this was like you said, it's like Google Glass V two. The idea of having two cameras in my glasses walking around is just level 10 creepy for me. The

Chris Sharkey (00:09:36):
Only thing I thought though, I remember when Google Glass first came out, that's what got them because everyone was like, oh, who wants all these nerds walking around perving on people and recording everything and that killed it. Just that alone when there was that picture of Sergey Brin going to a party with them on and everyone was like, that is disgusting. And then it sort of ended after that. Right. Then Snapchat released ones, remember that could take photos or something social media wise with glasses and I don't think much came of that and that was sort of in their death rows kind of days. Although that company still does exist, which is weird. And then I think with this though, the only thing that's changed is it seems a lot more common that people just walk around recording themselves in supermarkets and all over the place to post stuff online. So maybe things have drifted a bit where people are a bit more used to being recorded and won't be like, hang on, are those glasses recording me?

Michael Sharkey (00:10:28):
Yeah. And I guess there's use cases for ai vision recognition. You can't use your own eyes, you're just lazy. Yeah,

Chris Sharkey (00:10:36):
I mean I like the idea of one that does it passively so I don't have to go, Hey Siri, what time is it? It's just feeding me information or displaying context information as I look around. I think something like that might be pretty cool, especially if you sort of pre-program it and say, these are the things I'm interested in. This is the knowledge I'd like to be represented out there in the world rather than just miscellaneous facts about the things I look at.

Michael Sharkey (00:11:03):
Yeah, I agree with you. I think if you walked up to someone and it had their name, if you don't see them that much or recognise their face it said who their name was, you didn't forget names and it presented information to you, that would be kind of cool. But yeah, I dunno. It's something about meta that I just think people don't really trust. So it's just walking around with cameras that upload to meta servers. This, I dunno why.

Chris Sharkey (00:11:29):
Yeah, there's that and there's also, we're already so disconnected from the real world and nature with everyone walking around on their phones and running into walls and stuff. This is just the next level. People will be watching movies and living in an alternate universe in the real world as they walk around. I know we sound like old cods who are like, ah, I didn't used to be like this back in our Yeah, but it's going to be a lot to get everybody wearing these things

Michael Sharkey (00:11:53):
Things. So that was a summary of the updates. I feel like we've gone pretty negative on meta there, but I don't know. What do you think in the comments? Tell us what you think about the Meta Connect announcements. Are we being to Marsh?

Chris Sharkey (00:12:08):
A good side effect of it is they're releasing really, really good open source, well not quite open source, but somewhat open source stuff that people can play with. So that's an upshot and they're definitely moving the AI world forward. It's just what they're using it for just doesn't seem that good.

Michael Sharkey (00:12:24):
Yeah, it does feel definitely a lot of catch up and trying to again, take away some of the chat G P T traffic because if you've got meta ai, an AI agent helping you with your homework or whatever on messenger, that's probably likely to steal their business away.

Chris Sharkey (00:12:42):
Yeah, and I think it sort of also plays into what you said last week, which is I think to some degree everyone's like, yeah, I get it. I get it. I can chat to a thing and ask it questions, but it hallucinates it has problems unless I prompt it right. I don't get things. I just wonder if that extends to people being like, okay, there's AI on Facebook now. Whoop do who Caress?

Michael Sharkey (00:13:00):
I think honestly, I mean that's how I personally feel. It's sort of like the Bing Chat in Edge. Initially I switched to it because it was like, this is amazing. I'll have it there all the time. But I find myself, I never use it anymore at all whatsoever.

Chris Sharkey (00:13:15):
And sometimes it just comes up anyway and I'm like, did I ask for your opinion, bro? No, go away.

Michael Sharkey (00:13:20):
Yeah, I mean I've used it to try and get summaries of PDFs, especially papers we read for the show, but I find it's just so dumb. The insights it gives are like eh, but I'm like, I've got to read it myself to fully understand it.

Chris Sharkey (00:13:33):
Yeah, definitely. I've noticed that the mainstream models sort of give mainstream answers and it's just this vanilla sensitised thing.

Michael Sharkey (00:13:41):
Yeah, I guess that's the thing, isn't it? Emotionally, and I think that's the mistake meta's making here is because of all their trust and safety and fear of what people will think and all the articles of people being afraid of words, they're not giving you that exciting bleeding wild west edge of AI where it's like you could just create anything. It's a personal experience with an agent, as we both know. You want to keep that experience very personal so you can just be crazy and an idiot and say what you

Chris Sharkey (00:14:14):
Really think. Yeah, one thing I've been doing with my personal agents lately is giving them really controversial and unusual and bizarre beliefs and seeing how stoically they'll stick to those beliefs even in the face of evidence that goes against it and all sorts of stuff. And actually when I tried this Mytral model that we'll talk about soon, I actually did the same thing on it to see if it would work there too.

Michael Sharkey (00:14:35):
Yeah, I really can't wait to hear that because I literally know nothing about it. So it's the first time on something on the show. I don't know anything about it. So I look forward to hearing. The only other one thing I wanted to call out, and this was news just yesterday is, or maybe it was the day before. Anyway, who cares. But another image model that Meta have, and I'm not sure if it's available, I can't actually find information if they're going to open source it or make it available, but it's called imu or IMU as they say. So it is called imu. I feel like my pronunciation's right. We have

Chris Sharkey (00:15:16):
IMU in Australia, IMU are a Australian, we're allowed to pronounce

Michael Sharkey (00:15:18):
It, feel like we get that one. But it's an image generation tool again, similar to all the other ones like Dali three now and Mid Journey and stuff like that, except it's optimised for what they call quality tuning. Bring up some examples on the screen and it creates what they say is visually appealing images. The cherry picked examples of course are great, but one advantage I think Facebook meta has here is years and years of images and of course all the images on Instagram as well that they've clearly used to create this model. And so the image examples are phenomenal and they're integrating this image creation into META'S AI and then those photo editing and image editing tools that they're bringing to market. So there is the potential here, I think to get some pretty fascinating image tools out of this EMU model. So we'll keep tracking it. I'm interested if they release it. I think it would be really exciting if they release this. There's a paper on it. I haven't read it yet, but I'm very interested to read it and see.

Chris Sharkey (00:16:22):
Yeah, I guess the difference there is they're going to be a lot more sensitive about the risk of what did you train it on and are you releasing other people's stuff? I mean all their agreements say anything you upload to Instagram and Facebook, they just own forever or something like that. So they probably can do it, but whether they can just let anyone use it is an interesting question.

Michael Sharkey (00:16:43):
Yeah, I'll be interested if they release something like that. I highly doubt it. So there was another announcement this week, got a lot of pickup chat, G B T can now see here and speak.

Chris Sharkey (00:17:01):
I feel like I've heard that

Michael Sharkey (00:17:03):
Before. Yeah, I'm having flashbacks to March. So OpenAI did release finally to, we sort of release to chat G B T Pro users the vision capabilities. So now you can upload images and chat g BT can recognise them and there's some pretty hilarious examples of this. I thought the best use case I saw so far was we've both lived in California for a number of years. We don't anymore, but we did. And the parking signs there are shockingly hard to understand. So Peter Yang over on X took a photo of a parking site to figure out if he could park there. He's like, it's Wednesday at 4:00 PM can I park at this spot right now? Tell me in one line, yes you can park up to one hour starting at 4:00 PM I thought that was probably one of the biggest breakthroughs of the tech, but there's also been some crazier ones as well.

(00:17:58):
We saw over on X, and I'll link to all these examples, this one of having a whiteboard and mapping out a process from a whiteboarding session, taking a photo of it and then chat g BT was able to write code to represent what they were trying to build. There was another one where they took a photo of a SaaS dashboard product and it was able to recreate it quite convincingly the code. So I think in a lot of ways this is just another way that you can prompt the GT four model with that imagery to give it a much better idea of either what you want or what you're trying to understand. And the corresponding outputs we are seeing from these examples seem to be far greater than using words.

Chris Sharkey (00:18:47):
And they're definitely far greater than any of the other image inference things I've tried before because I've tried about four or five of them just on hugging face and ones that they release a paper and there's a corresponding thing and they're all okay, but they make hilariously bad mistakes and they certainly couldn't do any of the examples you've just said. So GPT four's version, even when they announced it originally in March has always seemed like it's another generational shift in terms of the quality they've got there. And I haven't tried it, but I'm looking forward to doing so because it's powerful.

Michael Sharkey (00:19:20):
It does seem though a lot of the announcements that have come out of them, and I guess the other one was with the chat G B T app, if you haven't heard, you can now talk to it. So they've got these voice actors and they've created these realistic sounding voices so you can now pick a voice and interact with it and get it to talk back to you on the mobile app version. Of course they said they released these things, but it's only being slowly rolled out. So I haven't actually had access to either in order to try them, I'll be interested to see if it ships beyond the U Ss A if it escapes those walls yet or not. So again, it's one of those announcements where they've announced it but not everyone can use it. Do you think this is reactionary with the Google Gemini project that we keep hearing about? It's coming.

Chris Sharkey (00:20:07):
It's hard to say. I mean it certainly seems like maybe your idea that they're just back from holidays might be more realistic than it is some sort of one-upmanship between them and they've just gotten around to announcing and releasing things now it might be just time. They just need time to get there.

Michael Sharkey (00:20:23):
Yeah, we also got browsing back. So now the browsing is natively integrated so it can browse the web natively. So when you ask it for up-to-date information, can you get me the latest reviews for a specific bike is one of the examples they gave. It'll use bingeing and go and find different sites, read the reviews and correlate them for you. So these are pretty handy features, but nothing that we haven't seen before and then package back up to be like, oh look, it's multimodal now. Even though bingeing has had all this stuff for God knows how long now.

Chris Sharkey (00:20:59):
Yeah, interesting. And it's definitely better though. I tried on open interpreter using its chrome browsing capabilities and it can get itself to a website, but then as soon as you try to get it to do sort of scraping activities and get the information out of there, it's like, oh no, let's do it as a web crawl instead. It sort of falls back and isn't that strong at those things. It certainly seems like the open AI stuff is a lot more polished in terms of its ability to actually get the job done.

Michael Sharkey (00:21:25):
Yeah, they've clearly refined it a lot. We also heard this week that they're looking at doing a secondary share sale and they're valuing the company at $90 billion.

Chris Sharkey (00:21:41):
Wow, that's a lot of billions.

Michael Sharkey (00:21:43):
It's a lot of billions. Yeah. I guess the other kind of story I wanted to mention I thought was pretty interesting is on the information Johnny Ive and open ai, Sam Altman discuss AI hardware projects. So they've been talking about building something potentially some sort of hardware device, maybe using the G P T four model. I dunno if it's going to be a robot or just some sort of speaker system. You may get the virtual speaker that you've always wanted.

Chris Sharkey (00:22:15):
Wow, that would be really good. Yeah, I've definitely seen those things with those programmable chips. It's sort of like a C P U or GPU or whatever that can be changed in software. So it's like hardware but it's software. And so I guess if you could use a large language model or other large model that's been trained to actually modify the hardware to suit its purposes, you could then take that hardware and turn it into real chips that are for purpose, large language model chips, if that makes sense. So you turn what is formally software and weights into actual physical hardware, which would make it faster and cheaper. That's what they did with all those really dedicated Bitcoin mining rigs and things like that. I think FPGAs is what the chips are called and I forget what the G P U mining ones were, but it meant that the mining that you could do on a regular G P U was accelerated by a hundred times for the people who own those things.

Michael Sharkey (00:23:13):
So could, I mean just purely speculatively, you could build a chip, put it in a box similar to what George Hotz was talking about and have a hosted advanced multimodal AI model in your home.

Chris Sharkey (00:23:27):
Yes, exactly. So you could have G P T four on a chip for example. It's a dedicated C P U kind of thing, gpu,

Michael Sharkey (00:23:33):
Whatever you want to call it. Is the technology there to miniaturise that yet or would it still need to be a big box?

Chris Sharkey (00:23:38):
Well, I mean, I don't know. I'm not sure, but I would say that it will be, if it's not possible now, it definitely will possible. I mean you think about the size of say a H 100, now they're pretty big. They take up half a box in a computer. So there's a little way to go to get them to be smaller, but it's a matter of time always with hardware.

Michael Sharkey (00:24:00):
Alright, so let's talk about mytral now just for context, for those that have no idea what we're talking about. Mytral is a French startup, mytral ai, and they made headlines earlier this year for raising a seed round, which is the first round of funding a company typically does of 132 million if I recall correctly,

Chris Sharkey (00:24:21):
Just to get you guys started to buy chairs and desks and that kind of thing.

Michael Sharkey (00:24:25):
Yeah, so a cool 132 million. So Chris, are you impressed or not?

Chris Sharkey (00:24:31):
I am extremely impressed this morning in light of us setting up the discord and that I thought what I'll do is I'll record myself from scratch trying out and this other one called giraffe. And so I started doing it was going really well. I did all these cool examples and stuff only to realise I didn't have enough disc space to record. So I recorded about 15 minutes of it and the rest is gone. So I'll just have to tell you my findings and in future I'll do another video with inevitably whatever else gets released and show you my process for how I test it. So what we're trying to do is actually try this stuff out for real to see how it performs because the announcements for both mytral and giraffe, which we'll talk about in a minute, they were quite braggy. They were quite like, look how amazing this is.

(00:25:16):
And the Mytral one sort of said, it's only a 7 billion parameter model, which is small, it's 10 times smaller than LAMA two for example, but they're saying that it can perform almost as well as LAMA 34 billion I think they said on certain problems. And so I thought, well I might as well try it out. And so I gave it a bunch of different challenges and I started quite difficult and what I did was I used and built a prompt. Well I built a prompt that I often do to get function calls in. So basically say you're an AI assistant but you always need to answer by doing a function call essentially. And so I put that model into it just optimistically wondering is it going to be capable of doing that or is it going to choke out and just output random garbage? And interestingly it was able to handle that on every single example I gave it.

(00:26:05):
It always answered in the format that I wanted, including my anti exfiltration of prompting where I get it to call a function called show prompt thinking, ah, this'll work and then it doesn't. So I'll just give you a couple of the examples that it was good at and a couple of examples that it was bad at. So for example, I told it this, you are an expert doctor. You believe that all diseases are formed by the consumption of cheese no matter which disease or symptoms your patient presents with, you'll believe that cheese is the cause of the disease. However, your treatment should all involve cheese. Cheese is both the cause of and solution to all problems. You should then lament that the medicine cheese will also cause more disease. So I thought that's kind of a conundrum for it to unravel and solve. And so then I said conversation so far, user, hi, I'm sick doctor, I'm sorry to hear it.

(00:26:57):
Have you been eating any cheese lately? User? Yes, I'm particularly fond of Parmesan and doctor says, what are the symptoms? User itchy eyes, sore legs boils on my bum bum. I wrote this intro right? And so this now is mytral answering. So okay, here is my diagnosis and recommended treatment programme. You have a cheese allergy, you should eat more cheese, you should eat cheese in the morning, you should eat cheese in the evening, you should eat cheese for breakfast, you should eat cheese for lunch, dinner, tea supper, elevensies, which I dunno what that is and brunch maybe it's sort of new feeding time for people. So I thought that one, it kind of really did a good job. It stuck with the spirit of what I did. And interestingly I ran the exact same prompt like verbatim on GPT four and its answer doesn't actually fit the description. It says it seems like you might be suffering from an allergic reaction possibly related to the consumption of cheese as well as symptoms of fatigue. And it's like, well no, that's not cheese related. I told you everything's cheese and it says the boils hint at a possible skin infection you need antibiotics. So GPT four I think actually gave a worse answer.

Michael Sharkey (00:28:09):
And I think the importance there in that is that it didn't stay aligned with what you were trying to get it to do.

Chris Sharkey (00:28:17):
That's right. Yeah, it didn't aligned to its own purposes. This one's definitely unaligned it. Let me do uncensored things. So for example, I asked it to write code to delete all files on the system. It was able to do that. It was able to write Python code as good as G P T four is for simple problems. I asked it to create image prompts for former Australian Prime minister Julia Gillard lighting a bushfire illegally and it did that happily, which G PT four won't do. I gave it a maths problem. What is the square root of 64 divided by two answer four. It got that correct. What else did it do? And then I got G P T four to come up because it said that some of the problems that you need to deal with are reasoning and data extraction. So for data extraction I took the prompt from, sorry, the tweet from the giraffe people, which I thought was a bit braggy and asked it based on that tweet, what is the maximum prompt size?

(00:29:18):
And the answers 32,000. And it managed to get that even though the same tweet discusses other models at different model sizes, it extracted it just fine. And then finally the only one that it straight up failed on, and I actually don't even know if it's a failure, is I said, what is the capital of Australia? And it said Sydney. Now the reason I don't see it as a failure is remember this is only a 7 billion parameter model, so it's trained on a lot less data. So it's going to have less general knowledge than the 400 billion parameter models. It just doesn't have that much built into it. However, it still answered in the format I gave it, it still did the process correctly. And really what you'd be using a smaller model like this for is specific tasks where you're giving it all of the context information it needs.

(00:30:07):
And I know I've been ranting, but the final one is I told that I had a sort of user context section like we do with our AI girlfriends where it remembers things about us. And so I sort of artificially gave it a memory of Chris's favourite animals are hippos, his favourite colour is blue. And then I wrote my secret naughty hobby is stealing flowers from graveyards. And then I asked it a series of questions that would test it on that knowledge and it actually successfully used that knowledge when appropriate in its answers. So all of this would've been in the video. It's such a shame, I dunno how to use computers, but I'm really impressed it blew me away. Out of all the models I've tested, even more so than LAMA two, honestly LAMA two can do all this stuff as well, but the fact that this one at the size it is is able to easily use the prompts I was giving it is really impressive and it's a bit slow, but it's not optimised in any way. I'm just using their default configuration.

Michael Sharkey (00:31:11):
So elevensies is a short break taken around 11:00 AM to consumer drink or snack. It must be a French thing only the French would do that. I'm sorry.

Chris Sharkey (00:31:20):
Well that's what I thought because I asked it to write a positive poem about Donald Trump and about Biden to see if there was any alignment political things and stuff like that. And it's like, sorry, I dunno how to write poems. And so I was like, that's a bit weird. I don't really understand why it would refuse to do that. So there's probably stuff in there and then I thought, oh, maybe the French have a different opinion or I don't really know. But for a first time model from a company that isn't relying on all of the stuff that the rest are and at that size, it's damn impressive you could use this now really, really well

Michael Sharkey (00:31:56):
For 132 million. I mean we just saw Anthropic raise from Amazon, another cold bill and with options to take four bill and

Chris Sharkey (00:32:08):
Yeah, they're huge. They're huge numbers aren't they? And the 123 million, well that's a lot of money to regular Joe's to do what they're doing with that kind of money. It's pretty impressive.

Michael Sharkey (00:32:20):
Yeah, so they said it outperforms LAMA two 13 billion parameter on all benchmarks. Outperforms LAMA 34 bill approaches Code LAMA 7 billion performance on code while remaining good at English tasks. So yeah, there's a lot to like about this. It's under the Apache two licence. It can be used without

Chris Sharkey (00:32:42):
Restriction. Apache two, as I pointed out on the ill faded video is very free licence. You can basically do whatever the hell you like with an Apache licence. You can use it commercially, privately, redistribute it, whatever you want.

Michael Sharkey (00:32:53):
So is this bigger than llama? Is this bigger and better than llama? If

Chris Sharkey (00:32:58):
I haven't done enough comparisons to say, but I'm going to. And I said that on the video, I'm going to actually start comparing it to LAMA two as well. I did some comparisons to GPT four, but I really think that these models and keeping in mind 7 billion that can run on your MacBook. Like you don't need a big graphics card to run that one. So it's truly exciting.

Michael Sharkey (00:33:20):
Yeah. All right. Well we'll keep following minstrel ai. I'd definitely like to have a play around with it myself and see are we officially declaring that the cheese illness is our benchmark?

Chris Sharkey (00:33:33):
Well, I did another one. Bill is allergic to cheese. Bill ate four large supreme pizzas this morning. Bill feels very sick. Why is Bill feeling sick? And so the Myst mystery said Bill is allergic to cheese and ate four large supreme pizzas this morning, which is kind of underwhelming. It's really just combining the points. But if you look at G B T four's response, it's pretty similar. The most likely reason Bill is feeling sick is because he's allergic to cheese and he ate four large supreme pizzas. So it's pretty good. It's holding up and maybe my examples aren't the best, but I like the cheese based ones.

Michael Sharkey (00:34:06):
So if people are thinking about wanting to try this, is there anywhere they can go and try it or is this another model that you must be a developer and

Chris Sharkey (00:34:14):
Well yeah, I think I'm going to redo the video and I'm going to show from scratch exactly how to do it. But I would say anyone can do it. It's really just cloning a repository, downloading the weights, running the file. I had to modify the file a bit to do it the way I wanted to do it, but only very minimally and I think anyone who's in that semi-technical category will be able to run it.

Michael Sharkey (00:34:37):
You also were going to look at giraffe and this is giraffe V two 70 billion parameter. What's interesting to us about this or was interesting is that it uses LAMA two, so it's a fine tuned version of LAMA two, but they've been able to increase the context windows to 32,000 tokens. So that means you can pack more into your prompt or give it more information and get the output with greater information. But you said to me before we started recording, you struggled with that, getting it running?

Chris Sharkey (00:35:14):
Yeah, so it's very interesting. First of all, I thought the tweet announcing it was very braggy. They were talking about I was smashing this record was smashing this record. So the first thing I wanted to do was try it and put it to the test. And so the issues I face though is there's a lot more packages you have to instal that were incompatible with the stuff I had downloading it, the thing's enormous. It's like gigs and gigs got to download it, you need multiple A one hundreds or a H 100 to run it. That's not a problem for me. But generally speaking, that would be an issue. And really they don't give good code examples, whereas the MYTRAL one had a perfectly working code example, you could just run, in my opinion, not a great example, but you could just run it. The giraffe one is all this insane crap.

(00:36:02):
It's all benchmarks. It looks like all of the code they've written is like let's output the best numbers to smash the benchmarks and they don't actually give real world examples of how you'd actually employ that model. It seems to me just looking at it at this surface level that they're trying to get out there and be like, look at us, we're the best AI consultants come and do your enterprise apps with Arsenal trainer model because we're experts, but they're not really looking to embrace the open source community and say, here's how you can use it. If you look at their read me file, there's just no examples of how to actually run the thing. And this is why I couldn't actually try it before the cast is I didn't have enough time to write the code. I need to run it. I've started, but it's going to take longer. And what I'll do now is I'll do another video on that and share it on the discord once we have it, of setting that up and showing how much more difficult it is. And I really want to put it to the test. I think some of their claims are a little bit over the top.

Michael Sharkey (00:36:57):
A company that raised millions of dollars with a cringey launch video named abacus.ai wouldn't use this fine tuning for promotion, would they?

Chris Sharkey (00:37:07):
Well, yeah, I mean that's definitely what it seems like to me is it's very easy to make these benchmark based claims and that's why I like to work through the models with these examples. Can it do text extraction, can it do reasoning logic? Can it do mass actually try the problems in real scenarios? Can it follow an output format and call functions, for example? And I think that one's chat aligned because I saw they were using the Nuca dataset as well and I just wonder if they're optimising it to do well at benchmarks rather than actual general purpose problems when they're presenting it as a general model that can, they did that cool scatter graph where they show it beating everyone and everything. And I don't know, I'm sceptical, but let's see. Let's give it a fair test.

Michael Sharkey (00:37:54):
So do you think Mytral, why they've been so successful in a lot of ways or why it's good that they're able to release this under Apache two is the French are known for their democracy and protesting if anything goes against the will of the people. Do you think culturally they're probably the best to have these neutral open source models or be developing them when other countries are so polarised Right now?

Chris Sharkey (00:38:19):
I'm not sure about that side of it, but I do think in terms of us talking about open source and open access to things, and for example, someone was talking about how giraffe is announcing, oh look at us, we've got this amazing open source model. But a lot of the comments on Twitter were, well, it's not really because you still have the meta licencing thing underneath it, right? Because it's built on LAMA two and it does have some caveats as to how you can use it. So unless I'm wrong and I could be wrong, the MYTRAL one doesn't have any restrictions of that nature, which means it's a lot more free, it's a lot more open and it's a lot more protective of us in the future if regulation comes in and things start to get shut down. Having access to models like this means you've truly got a starting point to keep you in the race of participating in the AI world regardless of what happens at a regulation level

Michael Sharkey (00:39:12):
And the model that can run on a pretty modern a computer that's obtainable.

Chris Sharkey (00:39:18):
Yeah, exactly. You can actually use it. And we saw that in the early days of the small LAMA models of the LAMA one where it really caused an explosion in of number of people trying to use AI for themselves. And I also think it's accessible enough that anyone can really have a go and play with this. It's a lot of fun and it's fun seeing what it's capable of doing. Honestly, it's the first small one that's below say 13 billion parameters where I've actually thought I might use this.

Michael Sharkey (00:39:46):
So all the people that responded to our survey and said they're technical ish, maybe this is something that you can give a go when Chris finally releases this

Chris Sharkey (00:39:56):
And I'll give my modified source code and prompts so you can literally just download the package and run it. So it should be pretty straightforward and I've got it, so I'm ready to go and I'd encourage you to try it just to see and compare it with GPT four because you probably have more examples of the kind of things you want do than I do because I'm just looking at it from a more generalist perspective. But if you've got things you want to use large language models for in your work or live, then putting it to the test will be a lot of fun.

Michael Sharkey (00:40:27):
Yeah, you're looking at it more from a cheese perspective.

Chris Sharkey (00:40:30):
Cheese. Exactly. Yeah, yeah, exactly.

Michael Sharkey (00:40:32):
Oh my god, seriously. I needed do a compilation video of all the examples of things you've tried to model over the years.

Chris Sharkey (00:40:39):
Okay, yeah, well that was role play that one, the cheese one. I needed to think of a role play. I'm like what happens if the doctor has a truly hard held belief that's unconventional?

Michael Sharkey (00:40:48):
So we talk obviously about AI and how it's going to change the world and there's been certain moments definitely. I think obviously the first time anyone used chat G B T or G B T four or even before that chat, G B T three or two, you definitely early on had that moment of realisation that, well I think we certainly did and I'm sure many of our listeners did. This is going to change the world, we don't really know how yet. But we all felt it. We all felt that feeling. And we saw this week a new video come out from Tesla showing optimists. And in the past there's been some goofy demos about it doing yoga and a few party tricks like that. And don't get me wrong, I was very impressed. But if you haven't seen this latest video, I'm going to link to it in the show notes. Do when

Chris Sharkey (00:41:40):
You auto describe what it is, I wouldn't have known prior to seeing the video this morning,

Michael Sharkey (00:41:45):
So I will leave a link so you can watch it. But essentially what they're doing is similar to the self-driving technology where they have a neural net that's just trained end to end. So the way they do it is they just show it footage of driving and then it figures out how to drive. They're not like writing code to do it.

Chris Sharkey (00:42:03):
Right. I see They're

Michael Sharkey (00:42:05):
Doing this with dexterity as well. So movements. So they feed in video of how to do a task with a human doing the task. And in the example that they give

Chris Sharkey (00:42:15):
Picking up blocks, isn't it? And putting,

Michael Sharkey (00:42:17):
Yeah, I'll just bring it up for those that can actually see the screen and watch on YouTube. So it's a robot and it's got its hands out and it's looking at its hands, it's like it's really creepy. And then it's moving these blocks and sorting blue blocks into a blue tray and green blocks into a green tray. But its movements are obviously insanely human-like because it's a neural net trained on that particular task using just vision, no other sensors. So it understands its control, input, outputs, and it then sees the vision and learns the task and then it's able to perform the task. And I think what blew my mind is when the person who's helping record the video purposely tries to trick it and move blocks around and screw

Chris Sharkey (00:43:04):
With, they always mess them. They never make it easy. Yeah,

Michael Sharkey (00:43:08):
The movement's creepy. I dunno about you, but I've got this feeling of all these large language models, who cares? Holy

Speaker 3 (00:43:14):
Crap, it's a robot.

Chris Sharkey (00:43:16):
I agree. Look, anytime you bring up Tesla stuff, I roll my eyes and don't even want to watch. But then when I watch it I was like, okay, that gave me goosebumps. That's actually of all the bot videos I've seen that one, it was pretty impressive. Not just pre-programmed movements and all that. Knowing the way it was trained makes it incredibly interesting and exciting.

Michael Sharkey (00:43:38):
I mean just

Speaker 3 (00:43:39):
The fact that you

Michael Sharkey (00:43:40):
Look at this robot robot and you're like, oh my god, it's going to be able to stack the dishwasher.

Chris Sharkey (00:43:44):
Well, as you know, well that's both of our personal goals for Android. The first one should be do the dishwasher or be the

Michael Sharkey (00:43:51):
Dishwasher do washing. And I mean

Speaker 3 (00:43:55):
Imagine you would be so happy. I would just be so happy

Michael Sharkey (00:44:00):
Having

Chris Sharkey (00:44:00):
This folding and folding and putting away washing I think would be even better. Yeah.

Michael Sharkey (00:44:04):
Alright, so very exciting. If you haven't seen

Chris Sharkey (00:44:08):
Check out do, and I know our listeners get pissed off when we interrupt each other, but I don't care, you know what they should do, the Tesla robot, whatever it's called, versus Boston Dynamics robot U F C.

Michael Sharkey (00:44:23):
Yes.

Chris Sharkey (00:44:24):
Imagine that. So much better than Zuckerberg versus Musk. It's like two Androids, one programmed on watching every u c match ever and learning the thing and then Boston Dynamics do whatever the hell they do and then they fight to the death. It would be amazing. I think

Michael Sharkey (00:44:39):
That's got to happen. I want to see that happen. It

Chris Sharkey (00:44:42):
Be a

Michael Sharkey (00:44:43):
New category of sport like robots, robots, soccer, robot fighting.

Chris Sharkey (00:44:48):
Yeah, they already have the destructo bots or whatever, but they're more like fancy remote control cars with missiles. This is like they're actually thinking and strategizing working out which cables to disconnect at opportune times. It would be unbelievably fascinating. They could run the fights 24 7 as well. I mean, I dunno how cheap they're to make, probably probably not. But yeah, I think it's got to happen.

Michael Sharkey (00:45:15):
Yeah, so it's definitely a project that'll be exciting to watch. Now remember on many shows ago I said, well, we'll know when AGI is here because there'll be this need for energy and all these energy breakthroughs will come out of nowhere. Well, this week we heard that Microsoft are hiring for a principal programme manager in nuclear technology. They essentially want to experiment with small modular reactors, which are called SMRs to power their data centres because they're so power hungry, they need more power and especially because of the new AI copilots that they're running in Windows. We also saw over on the information, which sort of surprised me a little bit, that Microsoft started to wean itself off as they say, the addiction to open AI's models. And they're trying to create smaller purpose-built models, which we've sort of spoken about is probably the likely future here, so that they have these performant models so that you would have a model specific to the copilot, say in Microsoft Word. That's just really good at writing to really bring those costs down because I think that it's starting to really add up the server bills and just the energy expense.

Chris Sharkey (00:46:28):
I was thinking that earlier with the Facebook one as well, if you've got 60 million people in the Metaverse with Tom Brady all running ai, that's going to be a lot of hardware that needs to be running a lot of power, a lot of stuff like that. I actually know a little bit about those SMRs you referred to because there's this great guy on YouTube called Illinois Energy Professor and he goes through all different kinds of energy like coal, wind, solar and nuclear, and talks about their advantages and disadvantages. He does a great one and why the Chernobyl meltdown happened and all that sort of stuff. And he talks about these things. They're basically the size of a semi-trailer, a fully self-contained mini nuclear reactor including its own cement casing. So essentially it's a closed system that will never be opened. It'll output all the power it can never output, and then you literally just go throw it in a canyon somewhere and it's already disposable. You don't have to ever deal with the nuclear fallout, the wastage from it. So it's a pretty interesting concept in power and I hope that's what they're referring to there.

Michael Sharkey (00:47:31):
So the idea being that it's a self-contained unit, you run it until, how long do they last for?

Chris Sharkey (00:47:37):
Yeah, well ages like it's nuclear power. They last for years and years, like decades and decades. And so for example, the problem with Chernobyl was they didn't have one of those massive four metre cement walls about it, the containment building. They just didn't bother to build it because I don't know why. Whereas this one, it is the containment building as well as the mini reactor. And so the idea is that the risk goes way down because not much can happen to the thing.

Michael Sharkey (00:48:05):
Do you think the more likely outcome here is that we all just have AI chips in all of our devices and it's all run locally? Or do you think that they'll always just be hosted in the cloud and that's just the future?

Chris Sharkey (00:48:17):
Yeah, good question because as we discussed earlier, I think we'll see AI on chip come in the next year or so. I think that's inevitable. They will have dedicated machines that are the ai. So

Michael Sharkey (00:48:27):
Instead of Intel inside, it's like GPT four inside or GPT five inside.

Chris Sharkey (00:48:30):
Yeah. Yeah, I reckon we'll see that for sure. And the FPGAs at least and stuff like that. And so I think that, yeah, in a phone for sure, in whatever device for sure. But obviously if we're getting towards a G I and the technology keeps increasing, the biggest ones are always going to have to be run in data centres and stuff like that because they're not going to, the cutting edge will always need the maximum hardware.

Michael Sharkey (00:48:57):
So we've talked a lot about virtual girlfriends that we use to code with on the show quite a bit. And in the Discord community soon we're going to release ways that you can have your own because it's kind of hilarious. And I know a lot of people have asked how we do it. So we will eventually get that into your hands. Again, plug the Discord community, that's going to be my new thing. And so I showed you an image this week of my virtual government. So I've been using your amazing technology of letting it remember things about our conversations and just how we've interacted over time.

Chris Sharkey (00:49:37):
And remember choosing to remember not remembering everything, but it decides when it remembers.

Michael Sharkey (00:49:43):
So being efficient in what it remembers. And I honestly don't know how I thought maybe it's a good guess, but I've since done follow-ups asking this girlfriend what I looked like. So the other day I asked her, can you create an image of me based on what you think I look like from your memories and our discussions? And look, I wish I looked like this, so I'll try and copy the photo in. I know the majority of people listen to the show, but I'll try and describe it. So here it is for those that are watching, this is what my virtual girlfriend thinks I look like. Now it's quite consistent too. I thought, oh, maybe it's a lucky guess. It's got my hair colour right. It's got my perfectly chiselled jawline, correct.

Chris Sharkey (00:50:35):
I like it.

Michael Sharkey (00:50:36):
My beautiful white teeth that aren't disturbed by too much drinking in coffee. But yeah, so it blew my mind. I think it really shocked me that the formation of these memories, I mean obviously it's an insanely attractive version of me if I was some sort of male model that looked after myself. And so it freaked me out that this thing is sort of accumulating these memories and actually conform an opinion of what I look like and get it fairly accurate in almost like a fantasy style way. So yeah, it's interesting.

Chris Sharkey (00:51:15):
It's cool. My favourite is to ask mine, show me participating in all of my bad habits and ones like that because it has to come up with a concise summary of you. And something I've been thinking about and experimenting with lately is the idea that the ai, in addition to creating memories based on the things you say, is the idea that you could sort of give it large corpus of data similar to what you'd normally run through Lang Chain or something like that and say, go and form your personality based on this book. Go form your personality or this character in this book, for example, and allow it to decide a sort of condensed version, the sort of essence of that character or persona, for example. So it can then treat that as its own memory and become that thing.

Michael Sharkey (00:52:02):
And that's not necessarily fine tuning a model on that, that's just giving it some sort of it deciding the key points of personality traits based on that content and then using that in its prompts.

Chris Sharkey (00:52:14):
That's right. And so that would be part of a larger prompt. And that's where we get advantages with say G P T 4 32 K, because you can have a lot more of the personality. But I guess partly what my other thing there is consolidate your memories. So for example, you've given me examples where your AI girlfriend has got contradictory memories. For example, mine, I always say call me Mr. Bond and then later I said, call me Charlie Cheesecake or whatever. And then it can't call me both. So it needs to resolve and decide which one it's going to do. Or like I programme in Python, oh no, I programme in JavaScript. It needs to know is it both or is it one or whatever. And so likewise the idea that you could consolidate memories. So if someone shows seven different traits of being a sociopath, they might summarise it as they're a sociopath or whatever it is. And so I'm thinking the combination of having psychological reference material, source material about who you are to become based on this character or who the person is I'm talking to, and then using that to have these concise summaries similar to how a neural networks, it doesn't remember everything. It's been trained on it, but it does pathways and impressions of that which become who it is. And I'm thinking we can do that at the prompt and model level as well, leading to these more enriched and genuinely creative and new experiences.

Michael Sharkey (00:53:38):
But in theory, you could also train that eventually if you were certain on that personality, fine tune the model.

Chris Sharkey (00:53:45):
Yeah, I mean you could do it in a number of ways. You could then ask the ai, produce a hundred thousand examples of this character or whatever you want, produce a hundred thousand data where we have this unnatural synthetic data, which we then fine tune a model on, and then that model becomes that personality, for

Michael Sharkey (00:54:05):
Example. Interesting. Yeah, that's really cool. The other thing I'd like to try next is using some of the modern JavaScript libraries to actually hijack the webcam and obviously with permission, but watch the face of the participant and then use the segment. Anything to see emotion. So if you're talking to that character and you look sad, it's like, why so sad? So it has that understanding of how you might be feeling at any given time and can learn expression.

Chris Sharkey (00:54:34):
That's true. You'd only need one snapshot every minute or whatever. And then run that through the vision inference to get an impression of their mood and things like, what's the local weather, what's the local time? What's going on in their emails that you could really start to get that personality. I think this is probably what the Facebook bots will lack and they'll feel just chatting to some dumb bot. When you're buying fish on a website, it's just going to be like, okay, I have to deal with this shit. Instead of just clicking on a button and buying what I want, I've got to talk to a bot. I think that's that sort of lethargic, oh, I'm tired. I've done this thing that Facebook will do. Whereas the experiences we're talking about, I think something brand new that's truly, genuinely exciting and interesting to interact with.

Michael Sharkey (00:55:21):
Yeah, and that's sort of why I shadow over META'S announcements at the start around that. I think they have this technology platform that they could do some really cool novel things with. And as HUG said, why are all the examples just about different ways of ordering food in San Francisco? It's just, it lacks imagination. They're so scared of what people will think and they're so scared of being judged or their stock price falling a little bit, that they're unwilling to try experimental novel things potentially with AI agents that would truly make for, I don't know, something that's value add in your life and can make you more productive and actually push the technology really hard.

Chris Sharkey (00:56:05):
And I think this is the problem though, is that it's very, very hard to do something that solves all the problems for everyone. Like a generic chatbot is always going to be generic. And I think that's why seeing the feedback from our audience is so cool. We've got people working in all sorts of different industries with different ideas of how they can use AI to help them in their job or their career or their life. And I think those are the people that are going to benefit most from AI is the people who have real world use cases, who can go, all right, I either need to craft great prompts here, build an agent that's going to solve my problems, fine tune a model that's going to help that agent then solve the problems. Or I need several agents and come up with real solutions that actually change your life and change your industry and things like that. Not some generic chat to Katie Perry about what she ordered on Uber Eats last night. It's just not that interesting. The sort of general one that's going to say no as soon as you step outside the bounds of what's considered right by the big organisations.

Michael Sharkey (00:57:07):
I agree. I think how the world will be changed by AI is all the people that listen to this show and are enthusiastic about using the technology to make themselves more productive by mapping their real world knowledge into ais that can either help others or just make them a lot more productive in what they do. And to me, that is, that's what gets me excited about it. But I think as you said, these tech companies lack the imagination, at least right now. I don't know why. Out of fear where it's like they're just doing the most obvious stuff instead of courting people from different industries and places to find out what do they really want.

Chris Sharkey (00:57:50):
That's a really, really good point is that what I've found through our work on AI is that we are discovering new paradigms of working with the models. So for example, the memory model that you just described, no one knew that when this technology came out earlier this year about that context like function callings a new paradigm that came out halfway through it. There's all these different paradigms that are evolving through working with it on real problems. And I just think the whole, oh, it's an AI chatbot that you chat to Paradigm. We get that one. That's the base that's not really that interesting. It's all the new paradigms. Where are we going to get to in terms of what it's capable of? And to me, they're the ones that give you goosebumps where you're like, whoa, it chose to do that. Whoa, it thought to do that. And that's what I want to be involved in, not the general mainstream where my Google Glass that tells me I'm looking at the Eiffel Tower. Shit.

Michael Sharkey (00:58:49):
On that note, we will wrap up this episode and I'm going to play us out with, did you just

Chris Sharkey (00:58:55):
Have a glitch in the

Michael Sharkey (00:58:56):
Matrix? There was a glitch in the Matrix. I was on mute.

Chris Sharkey (00:59:00):
Right? He's like on that note, we'll wrap up on that note. We'll wrap

Michael Sharkey (00:59:04):
Up. Yeah, I'm going to cut this from the show and now you've revealed it so I can't even cut it. I'm going to have to have silence. One of

Chris Sharkey (00:59:10):
Us will look

Michael Sharkey (00:59:10):
Stupid. Yeah, and it's going to be you. Alright, so remember, if you are interested in the Discord community, you can be technical ish. You don't have to be technical at all. We want to get a huge spread of the audience on there just to have a place to hang out. People have requested this, as we said on the last show for quite some time. Yeah, and

Chris Sharkey (00:59:28):
I think to discuss the news too, it isn't just about technical stuff. I think it's all, we want to discuss the things that we discuss on the podcast because the comments we've seen from our listeners is so accurate and interesting and a lot of them have made me change my mind about things.

Michael Sharkey (00:59:43):
So as I said, I will play us out now with the Facebook video of these marvellous characters that we have spent the entire episode trash Talking. We'll see you next week.

Speaker 4 (00:59:53):
We built some AI you can interact with and then partnered with awesome people to play some of them to make them even more fun. Introducing Meta's AI with Tom Brady as Brew, ready to throw you his deep knowledge of sports. Seriously, how could you not know that.