This Day in AI Podcast

The future is unclear but entertainment is ensured. This week Mike & Chris discuss Apple's new AR headset, OpenAI's dominance and India's A.I. ambitions. A 1964 prediction of superintelligent machines proves worries never change. Place your bets, grab your headset - the race to build our robot overlords is on!

Please consider sharing this podcast, subscribing and leaving us a review. We appreciate your support!

Chapters
----
00:00 - A prediction from 1964
00:20 - WWDC, Apple Vision Pro & Don't Say AI
07:53 - VR & AI as platforms for the future
17:58 - Gambling BOT Won & AI Reasoning + Untapped abilities.
40:00 - Sam Altman Vs India: OpenAI & Real Competition
1:00:02 - Is an AI Future Dystopia Inevitable? 

Sources
----
https://twitter.com/_akhaliq/status/1666505321005588489/photo/1
https://mashable.com/article/apple-avoids-ai-wwdc-2023
https://www.news18.com/viral/apple-vision-pro-headset-sad-father-daughter-photo-twitter-meme-8017129.html
https://twitter.com/jeffclune/status/1664618665160085505?s=46&t=uXHUN4Glah4CaV-g2czc6Q
https://twitter.com/johnjnay/status/1666835733456273409?s=46&t=uXHUN4Glah4CaV-g2czc6Q
https://arxiv.org/pdf/2210.03629.pdf
https://www.runpod.io/
https://twitter.com/abacaj/status/1666813744842960898?s=46&t=uXHUN4Glah4CaV-g2czc6Q
https://venturebeat.com/ai/senators-send-letter-questioning-mark-zuckerberg-over-metas-llama-leak/
https://www.bloomberg.com/news/articles/2023-05-24/amazon-s-answer-to-chatgpt-seen-as-incomplete-vaporware#xj4y7vzkg
https://twitter.com/cambaughn/status/1664819332050722816?s=46&t=uXHUN4Glah4CaV-g2czc6Q
https://twitter.com/LangChainAI/status/1664301150760497154
https://twitter.com/JonErlichman/status/1665139140524900352?s=20

What is This Day in AI Podcast?

This Day in AI Podcast is a podcast all about AI. It's an hour-long conversation on the influence and rise of AI in technology and society. Hosted by Michael and Chris Sharkey.

BBC Presenter (00:00:00):
They will start to think, and eventually they will completely outthink their makers. Is this depressing? I don't see why it should be. We superseded the Cro-Magnon and Neanderthal men, and we presume we are an improvement. I think we should regard it as a privilege to be stepping stones to higher things.

Michael Sharkey (00:00:20):
So Chris, the Worldwide Developers Conference from Apple has occurred. I don't think anyone ever says it as Worldwide Developers Conference, but there you go. WWDC happened. And of course the whole conference was about generative AI. Just, just kidding. Literally, they didn't even use the AI word once. And this being Apple, it was clearly crafted to intentionally not use the AI bad word.

Chris Sharkey (00:00:49):
Yeah, exactly. I guess they don't have a play in that space really, and so they don't want the conversation switching to that.

Michael Sharkey (00:00:56):
I don't know if it was that. It seemed like they intentionally just didn't say it to start to position themselves around this whole privacy angle that they've had for years, which is: we're not gonna say it because people are scared of it, and so we're just gonna protect you from this big bad world of potential threats from AI. That's maybe, I think, the angle they're gonna play here.

Chris Sharkey (00:01:22):
Yeah. And the few things they do have are like running local models on the phone and not like automatically sending everything on your phone to the cloud for processing and that kind of thing.

Michael Sharkey (00:01:31):
We'll get to the whole Vision Pro thing in a moment, but I thought this, uh, this little meme here from Twitter, which I'll read out for listeners: What did you do with generative AI? Microsoft: we built a copilot for the web. Google: we created Bard. Apple: we fixed autocorrect.

Chris Sharkey (00:01:49):
Yeah. Yeah. It's funny, isn't it? It's so, it's so simplistic, what they're doing. So

Michael Sharkey (00:01:54):
For those that aren't aware, they are using a large language model, or, or a language model, on the phone to run the, the machine learning so that your autocorrect starts to personalise to you. So it's gonna be the end days for ducking hell, which everyone is more than familiar with from Apple phones. Yeah. And

Chris Sharkey (00:02:14):
Like, as simplistic as it is, I mean, I haven't used an iPhone for over 10 years, but those kinds of things, I mean, it's your day-to-day life, they get really frustrating. So I think them addressing those incremental things with the tech where they can is, is nice. It's cool. They're

Michael Sharkey (00:02:28):
Autocorrect is notoriously bad though. So I think if this is the main thing to come out of AI, not the death of all humans or the extinction of mankind, having better autocorrect on your phone is a good innovation. Yeah. And

Chris Sharkey (00:02:41):
I think Apple sort of has a way of applying technology. They do it a little later than others, but they do it better and they do it classy and, and all that sort of stuff. It's very polished. So I can, I can see why they're not just getting into the wild west of AI just immediately and all in

Michael Sharkey (00:02:57):
It is interesting too, cuz we speculated they'd build some improvements into Siri. Maybe Siri's gonna get some better features or just be improved to actually being a useful assistant. But the only innovation we saw was that they've just changed it from "Hey Siri" to "Siri" now, when you want to signal the, the Siri thing. So

Chris Sharkey (00:03:18):
Yeah, I mean, as soon anytime I get a new Mac or whatever, the first thing I do is disable Siri. It's gotta be the most useless and annoying thing ever invented.

Michael Sharkey (00:03:26):
Yeah. And I saw this week Microsoft are actually finally killing Cortana, their virtual assistant, for, for the new Copilot as well. So it does show, I think, the days of those assistants are over. But of course the big story out of WWDC was this, you know, strapping, uh, the, what do they even call it, Apple Vision Pro to your face, mm-hmm, and entering the augmented reality world, uh, by Apple. And it was interesting because we heard rumours early on, and we even speculated on this podcast, about the fact maybe they were gonna use generative AI to create new worlds that we would live in, and we had all of these amazing dreams for the product. And then it turns out it's like, you know, you can sit on the couch alone watching a movie, or watch your kids and create a 3D memory of a moment instead of living in the moment

Chris Sharkey (00:04:19):
Yeah. It's like, don't worry about the actual moment, let's just remember it. It's like, yeah, it's like anytime you go to a kid's performance at school, now it's just everyone holding up their phones for the entire performance. And even when they're like, please, no filming, everyone just films the entire thing. I think the smart thing is just let someone else film it and ask them for a copy of their video.

Michael Sharkey (00:04:38):
Yeah. I find myself doing it too. You have that urge now to just record everything because you're like, oh, I need to watch this later, or, or keep this memory. But then I never actually do, there's not enough human time for me to go back and just sit and go through Google Photos and watch every, uh, video or, or thing. And so I find myself not living in the moment as much. And I, I wanted to call out an article that said the internet has discovered the saddest thing about Apple's Vision Pro headset, and it's this: the father with the headset on watching his kids, creating a 3D memory of, of this moment. And I'm

Chris Sharkey (00:05:16):
Like, it is a kind of dystopian awful image, isn't it? Like I'm surprised they included that for that exact reason.

Michael Sharkey (00:05:24):
Yeah. I think that it made me think about these two visions of the future around the two main trends in technology now. One being the, you know, all the hype and the new news around AI and how it's gonna, on one hand, you know, help save humanity and be great, uh, and solve all of our problems; on the other it might, you know, kill us. We'll get to that a little bit later, but then on the as

Chris Sharkey (00:05:49):
Usual

Michael Sharkey (00:05:49):
Yeah. But then on the other side of the equation, which I, it's like, okay, now let's strap a computer to our head and, and it's,

Chris Sharkey (00:06:00):
There's just something so desperately unnatural about it, isn't there? It really just doesn't seem right and I, I don't know, it's a tricky one. I do think, you know, a lot of people have said for a long time augmented reality's gonna do better, because a lot of people find the full-immersion one really sickening. They feel sick or disconcerted, or for me, I feel paranoid. Like, I have an Oculus and if I ever put that on, I only do it when I'm home alone and I know that no one's gonna run up and attack me. Like, I find it really stressful to use and I just can't see it being a major part of the future.

Michael Sharkey (00:06:36):
Yeah, I, um, I mean for me personally, the idea of having a headset on and being able to create a huge monitor wherever I am and do my work, uh, through that, that means if it didn't create a lot of eye strain and it, you know, didn't ruin your neck from the weight of the device. That's true. Well, then I, I think I would be a, a big fan of, of using it in the, the work context. Uh, it would be so cool to just go into a room and have an empty desk and you just slap on a headset and, and get to work. I think that would be pretty cool.

Chris Sharkey (00:07:10):
That is true. You're right. If they can deliver on that vision, it's good. Like, I've tried doing things like that on Oculus and the two issues I had were getting eye focus, like maintaining eye focus, and the paranoia. Um, but yeah, I guess with augmented reality it's gonna be a bit easier to achieve. And the earlier reports on this thing are, uh, that it's actually pretty good and actually worth the hype. So I, I wouldn't doubt it from Apple either.

Michael Sharkey (00:07:33):
Yeah. But going back to that two-visions-of-the-future thing, it, it just feels like with ChatGPT, and how it's being used already by hundreds of millions of users worldwide to improve their productivity and eventually, hopefully, you know, solve health issues and have all these great impacts for society and humanity, that's like a really accessible view of the future. The AI, uh, narrative right now feels like the invention of electricity to me. Like, we've invented this new technology. This is a technology that should in theory, hopefully through the open source community, be in everyone's hands, or everyone will benefit from it in, in certain ways. That's my hope. But on the other hand, there's another view of the future from Apple, which is: immerse yourself, you know, have to spend three and a half thousand US dollars on this headset. I know the price will come down over

Chris Sharkey (00:08:24):
Time. That's like, oh, we can, we can quietly sit and watch a, a family movie at home for like 20,000 US dollars or something. Cuz each person's gonna need one .

Michael Sharkey (00:08:34):
But yeah, I, I was hopeful that this would actually be the first application of AI in a virtual world where you could slap your headset on and have a, you know, this AI agent sitting next to you in the virtual world working with you. This was my dream. Right? And it's like your

Chris Sharkey (00:08:50):
Assistant, well, that's not, that's not to say that won't come, because remember when Apple first released the iPhone, they didn't even have an app store. Um, so I think that, you know, if, if it takes off and they keep it and there's new generations, and it doesn't die like the Touch Bar, uh, then there will be those AI applications. Surely the people who have early access to this thing to develop on it are developing AI-based applications for it. They must be.

Michael Sharkey (00:09:16):
Imagine, though, slapping it on in your house and there's just these characters walking around, like you create almost like fictitious characters in your life, and roommates, like you could truly live in this augmented world where there's these video game characters that you're working with. And I know one of our listeners, he's actually working on this VR nostalgia simulator type thing where you can go back into this retro time of technology and be immersed in it. And someone like him could now, with this headset, create that world, deliver that as an app and have these characters from the past. Like, he could sort of go back to the nineties, holodeck style, thanks to a combination of this headset plus AI and

Chris Sharkey (00:09:56):
The AI Yeah. And have that sort of dynamic environment you've spoken about in the past where the actions you take in the fake world actually have real consequences that have knock-on effects and memories and those kind of things. It isn't just NPCs that are pre-programmed in a certain way. Yeah.

Michael Sharkey (00:10:13):
And you can go in and, like, screw with that world, sort of like being in Grand Theft Auto. I still think all of these visions of that potential future, being the entertainment value of AI plus this immersion of AR and VR, is, is gonna be a winner.

Chris Sharkey (00:10:28):
Yeah. And then the other thing, I've always had this vision ever since Google Glass came out, this idea that augmented reality combined with some sort of database at the time, and now with the AI lookup, is, you know, I see someone, I've forgotten their name, I've forgotten their kids' names, I've forgotten all this stuff about them, and then I just get a heads-up display of, oh, this is the person, last time you saw them you promised you'd do this and you haven't done it, or whatever it is. And you know, you've got all this context about the world around you. It's like, oh, you know that planet you're looking at in the sky tonight, that's Mars. Mars is, you know, gonna be over our planet for the next little while. Jupiter's that one next to it. You know, there's, there's certain applications for it where you'd be like, this would be truly exciting. And okay, AI probably isn't necessary for all of it, but a lot of it, as we've seen, can be really enhanced by its own evaluations of what you are seeing. And we know that the vision capabilities are there in AI now to do it too. I

Michael Sharkey (00:11:23):
Just don't get why Meta is not already just racing. Maybe they are racing off, they're still working

Chris Sharkey (00:11:28):
On the legs, Mike.

Michael Sharkey (00:11:29):
Yeah. They still haven't figured out the legs yet. Why aren't they, why aren't they racing off now and, when you go into the lobby in, in Meta, creating all these virtual AI-driven characters so that it feels like there's a lot of people in there? Cuz, you know, no one really uses it that much. But I, I think that would be one way they can get a head start here: just go all in on, on the AI applications in their metaverse to sort of beat Apple to the punch, because it's gonna be a couple of years before, I think, they're selling a huge amount of these units, when I can go and buy the, the Quest 3 or whatever they call it now, that new AR one, for. But I mean, do

Chris Sharkey (00:12:10):
You think Apple is the kind of company that wants to create that stuff? They're more about building the hardware and the, the operating system. They're not really ever making their own software.

Michael Sharkey (00:12:21):
No, but I think their ecosystem of developers are now looking at this technology and saying, well, you know, I want to access a big audience and, uh, I want to get my app sold to as many people as possible. Um, uh, you know, and now thinking about applications. And if Meta courts some of them and says, hey,

Chris Sharkey (00:12:40):
Yeah, yeah.

Michael Sharkey (00:12:41):
Anyway, I I just think there's a lot of potential here for the future of entertainment. Like, it, it's inevitable if, if you can create these engaging worlds that you can be a part of and insert yourself into, this could become one of the most addictive technologies of our lifetime. Like,

Chris Sharkey (00:12:55):
Absolutely. And like, you know, you've, you've shown me throughout the week people building these, you know, ChatGPT or other LLM-based personal chatbots that take on a persona, and people are paying to, to chat with, like, a virtual girlfriend or a virtual friend or just a tutor or whatever it happens to be. Those are immensely popular. Once that has full interactivity and you can interact with, like, an AI bot that remembers you, remembers what you said, remembers what you asked it about, I think that that sort of companionship and that sort of, um, relationship, even in an educational context, could be highly valued by people. I'd use it. Like, you know, I've mentioned a few times I'm learning German. If I had that ability to sort of interact with a large language model that appears to me as a person who can speak to me and correct my language, I'd use it for sure.

Michael Sharkey (00:13:45):
Yeah, it seems like there's a lot of applications for this that, you know, I mean we probably can't fathom all of them yet, but there's some really obvious ones. And we did that whole special when we were speculating about this, this device from Apple that, you know, you would potentially have a virtual tutor like this is what I mean, even in the workplace, uh, I know everyone's trying to like order everyone back to the office now, but it seems that a device like this is the complete counter to that. Like you can be in the office, but you just slap on the headset. It could also destroy international travel for meetings because if you really feel a presence in the room.

Chris Sharkey (00:14:24):
Yes, that's a really good point. People who are travelling specifically to have meetings, as long as the, you know, the internet's there, which it probably will be, I could see that you could get that level of personal contact, um, from it, uh, without actually having to travel there. You're right. I, I think there are positives to it. There's a lot of positives to it. And even though I don't tend to use Apple products myself, you can definitely see that they get the technology right. And I think it's only such a high cost because it must have to be, because it's at the bleeding edge of the technology.

Michael Sharkey (00:14:57):
Yeah. I mean, you look at what they've packed into it, I don't, I don't really understand.

Chris Sharkey (00:15:01):
I think I'm cynical enough, and inflation's high enough despite what they're trying to do with interest rates, that people are gonna buy 'em like mad. Like, I don't think the cost is gonna put anyone off. I think people can afford it and will buy it.

Michael Sharkey (00:15:13):
Yeah. I, I agree. I don't think the price is... I mean, they sell a, a monitor stand for a thousand dollars and people buy it, like, they're fine. They're gonna be fine. Yeah. Okay. So to sort of wrap up our, our amazing WWDC coverage here, I thought this

Chris Sharkey (00:15:30):
Was, it was pivoting the podcast to be about.

Michael Sharkey (00:15:32):
Yeah. Apparently we're an Apple podcast now, but I, I do think there's a lot of overlap and a lot of themes intersecting here that are worth discussing. It'll intersect.

Chris Sharkey (00:15:39):
I think that's what we're really saying here, is that while they may not have an AI play in this current announcement, they're going to have a horse in this race. If it's not them, it's going to be their developer community who are gonna, you know, bring that technology onto their, their new headset.

Michael Sharkey (00:15:56):
I'd be curious too, if any developers are listening to this podcast and, uh, want to reach out to us and share. Um, maybe you don't wanna share, but if you are working on this, or if you're interested in this as a platform, I'd, I'd be really curious to know. Um,

Chris Sharkey (00:16:09):
Yeah, me too. It's always nice to hear what people are working on.

Michael Sharkey (00:16:12):
So there was an article in Mashable, I just wanted to call out this quote from it, Apple avoids AI at WWDC, uh, and it said Federighi made this point, uh, subtly, not on stage but on the page, in a post-game interview with Fast Company. He raised the spectre of malign AI-driven deepfakes: your loved one supposedly calling you because they forgot their password, for example. "We want to do everything we can to make sure that we're flagging deepfake threats in the future," Federighi said. So they definitely are, as I said earlier, going with this angle of, you know, like, we're gonna protect you from the AI demons.

Chris Sharkey (00:16:52):
I think it's the classic thing though. It's, it's almost like, you know, that thing with your reputation: once your reputation's broken, you can't get it back. So they're probably just, I mean, they're the richest company in the world or close to it, I think they're just, you know, being careful and not, not taking some unnecessary risk in a market that really doesn't know what it is yet.

Michael Sharkey (00:17:12):
Yeah. And it's typical Apple, right? They'll wait on the sidelines, watch this play out, see where it's going, and maybe in two years we'll get some sort of, you know, breakthrough product. Or maybe it is the developer community that just turns the iPhone and this new vision pro headset into the platforms of the future for, for AI before, uh, you know, robots come and inevitably kill us all. Hopefully first stack my dishwasher, then maybe death. But I just, I really wanna live in a time when that happens. If

Chris Sharkey (00:17:42):
As someone who has a broken dishwasher at the moment, I welcome the AI overlords to come and do my dishes. I'm always open to that. Yep.

Michael Sharkey (00:17:49):
If they gave us like a 20 year horizon, it would be, it would be fine. Alright. Speaking of the future, uh, we have, we have a very important update to share with listeners.

Chris Sharkey (00:18:01):
Yes. The update is that we're quitting the podcast because we're rich from gambling . Yeah.

Michael Sharkey (00:18:05):
So we haven't mentioned it in a while. And for those people new to the show, we, uh, early on we were playing around with two really stupid use cases for AI. The first was trying to build, uh, AGI with all the tools that were available today, which I know BabyAGI and a few other of these things came out, uh, and weren't that great, but, you know, we were, we were playing around with that early on. And the second was, can we stack the odds in our favour by using the AI to enhance, uh, our ability to gamble on certain events, uh, in order to win a lot of money? And, and it's significant odds. And I want to read you a transcript from a chat Chris and I had recently, uh, about a particular bet. So let me, I'll read this to you and then I'll, I'll give you the kind of backstory of where we're at with this. Yeah. Uh, so, me: did you win, Chris? Chris: it's running now, it's coming second. Me: if this works, we should become professional gamblers. Chris: yeah. Chris: it's in the lead now, still a bit to go. Me: [inaudible]. Me: this is not... Chris: photo finish. Me: [inaudible]. Chris: it came in second. Me: [inaudible]

Chris Sharkey (00:19:21):
This reflects very well enough

Michael Sharkey (00:19:23):
So yeah, obviously, uh, I'm trying to censor the hell outta this, but we, we didn't win the first race, and this is the first time we've actually tried to bet on horse racing and

Chris Sharkey (00:19:35):
Yeah. And so the advantage of horse racing, just so everyone knows, is it's on all the time. Yeah.

Michael Sharkey (00:19:39):
It's

Chris Sharkey (00:19:39):
Literally something. I was doing it, I was trialling it on sports before, like AFL matches and things like that, but they're only on, on the weekends, and I'm busy on the weekends. I don't have time to be sitting around running AI models and gambling. So, um, but horses are on all the time. It's fantastic.

Michael Sharkey (00:19:55):
Yeah. And so by no means are we endorsing gambling. In fact, in Australia there's a huge gambling problem. Gambling is readily legal, I think we're the biggest gamblers in the world, and I honestly hope one day they, they ban most of it. But if we can kind of screw over some of these gambling companies by stacking the odds in the people's favour and letting them win money, we thought, well, this is a good use case of AI. So we didn't win the first race, but never fear, uh, we, we weren't gonna give up that easily. So we, we

Chris Sharkey (00:20:25):
Wouldn't, we wouldn't raise it if we'd lost, would we

Michael Sharkey (00:20:28):
? Yeah, of course not typical gamblers.

Chris Sharkey (00:20:31):
Yeah, exactly.

Michael Sharkey (00:20:32):
And so the, the end outcome here is that, uh, we, we started getting the AI to pick something quite difficult, which is the, the first four horses, is that right? The first four horses in, or the first

Chris Sharkey (00:20:45):
Four horses in the race, but not in order. So you don't worry about the order so much, like who are gonna be the top three or the top four horses basically.

Michael Sharkey (00:20:53):
Yeah. Which is, which is extremely hard to do, obviously. And so the odds on it are, are pretty damn big. And so here is what happened. So, for context, we put this, this bet on in the afternoon, I think it was near, near dinnertime. And I went and had dinner with my family, and then Chris messaged me saying that, uh, cuz he put on a, a different type of bet, that, that, uh, you know, we lost. I just bet

Chris Sharkey (00:21:20):
On the, I just bet on the winner. Yeah,

Michael Sharkey (00:21:21):
Yeah. But I went to, I

Chris Sharkey (00:21:22):
Didn't know, I didn't know how to bet on the, the first four or whatever. It's too complicated, but yeah.

Michael Sharkey (00:21:27):
Yeah. They don't make it easy. Anyway, so here is what happened. So I, I put this bet on, it cost me $24, I think that's the minimum bet for this kind of structured, uh, bet. And I load up my phone and I've turned $24 into $1,278, all thanks to AI stacking the odds in, in our favour. And to be clear, I think I had, I think I had $40 in my account from previous smaller bets that we had won through AI as well. So by no means am I down money, I haven't actually lost anything. I'm just up now by about $1,300. And so, Chris, can you explain to everyone listening how this works? Why AI can potentially give you an edge in, in things like this? I think, yeah,

Chris Sharkey (00:22:18):
And so I, I do want to go through it, because I wanna make it clear I'm not at all claiming that large language models can predict the future or anything like that. Nothing can. Um, and what I think its advantages are, it's not magic, but what it is able to do is take into account a lot of information at once. And to be clear, to do this, we're using Anthropic's Claude 100k model.

Michael Sharkey (00:22:43):
You just got us banned, but go on.

Chris Sharkey (00:22:46):
Oh yeah, exactly. I mean like, I hope, I hope not because yeah, like I really, really like that model and we're using it for legitimate things as well. This is just experimentation. Um, please overlook, please,

Michael Sharkey (00:22:57):
Please, philanthropic.

Chris Sharkey (00:22:58):
Um, but what's amazing about it is you can give it the full, full history of every race that every horse has been in. And like, I've never really been in the horse racing community, but these guys, they store a lot of information, like everything you could possibly know about a horse, everything it's ever done, the kind of gear it's got on its head, who the jockey is, the jockey-trainer combination, whatever. It's a lot of information. And my real assessment here is that the AI isn't really doing anything magic, but what it is doing is similar to what a human would do if you could, well, I mean, everyone has access to the information, it's free online, but sort of take it all into your mind at once and think about, oh, how does this factor affect this factor?

(00:23:44):
And does the jockey, whatever rider combination in here, ha, have they ridden before? Have they ridden at this distance? Have they ridden at this distance with this jockey, with this trainer? Have they ridden in this weather? Cuz it takes into account if it's a wet track and all this sort of stuff. So there's just so many factors that you have to take into account. And I know there's, there's racing people who do this themselves, you know, with their own human brains, but this is being done in a matter of 30 seconds, um, just by literally giving it a URL where it scrapes all the information of all the, the horse history and doing it. And um, one really, really interesting thing that it does consistently: I don't give it any odds information, so I don't tell it who's the favourite or any of that.

(00:24:31):
And yet it very often will, will pick the favourite. Like, it'll say, this is the horse that's going to win. And this is part of the problem with the system, right? The favourite doesn't always win, but it's picking one that's a dollar fifty to one, so it's not really worth it, you know, the juice isn't worth the squeeze kind of thing. And I think the next evolution of it is for it to do these ones we're talking about, where it's picking multiple combinations, and ones where the payoff is so big that you only have to be right some of the time. But the other interesting thing that, that we noticed through, through trying this out, and we've only tried it a couple of times, is that often in the information you give it there's sort of the, the horse racing experts' assessment. Like, oh, you know, this horse, she's coming off a four-week break and, you know, uh, was slow here last time, you know, one to keep safe or prefer others, and all this, this jargon that they use in the horse racing community. If you include that in the prompt, um, then the model gets really lazy and it'll just use that as its own assessment.

(00:25:34):
It'll just completely bypass the whole thinking process and just tell you that, regurgitate it, and it's really bad. And so what I've noticed from doing it is you actually have to sort of give it the raw information with no biases, even in the way you ask the question, to get a sort of more accurate response from it.
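
For anyone curious what this looks like in practice, here is a rough sketch of the kind of prompt-and-query loop Chris is describing, written against the 2023-era Anthropic Python SDK. The field names, the exclusion list and the wording of the prompt are our own illustrative assumptions, not anything from the show; the one idea taken directly from the conversation is to feed the model raw form data only, with the tipsters' commentary and the odds stripped out so it can't just regurgitate them.

```python
# A sketch only. `runners` is assumed to come from your own form-guide scraper;
# every field name here is a hypothetical stand-in.
import anthropic

EXCLUDE = {"expert_comment", "tip", "market_odds"}  # anything that could bias the model

def build_prompt(runners: list[dict]) -> str:
    lines = []
    for runner in runners:
        facts = {k: v for k, v in runner.items() if k not in EXCLUDE}
        lines.append("; ".join(f"{k}={v}" for k, v in facts.items()))
    form = "\n".join(lines)
    return (
        f"{anthropic.HUMAN_PROMPT} Here is the raw form history for every runner in the race, "
        f"one runner per line, with no odds and no tipster commentary:\n\n{form}\n\n"
        "Reason step by step about track condition, distance, jockey and trainer history, "
        "then name the four runners most likely to fill the first four places, in any order."
        f"{anthropic.AI_PROMPT}"
    )

def pick_first_four(runners: list[dict]) -> str:
    client = anthropic.Client(api_key="YOUR_API_KEY")  # 2023-era client; newer SDKs use Anthropic().messages
    response = client.completion(
        model="claude-v1-100k",                         # the long-context model discussed in the episode
        prompt=build_prompt(runners),
        max_tokens_to_sample=700,
        stop_sequences=[anthropic.HUMAN_PROMPT],
    )
    return response["completion"]
```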

Michael Sharkey (00:25:52):
Yeah, and this is one thing we've talked about on the show before, and it's been covered in a paper I'm sure we read in the past, around the attention problem of these models, where you can't really keep its attention focused and it's very easily influenced when you put bias in. Actually, it was only a couple of weeks ago we talked about this: you can easily introduce bias into the model by saying "I think the answer is this," uh, and that will bias it towards that outcome more likely than not. And so we encountered that problem directly when trying to make this, uh, this app find the right combination of winners.

Chris Sharkey (00:26:32):
But what absolutely blows me away, and I've experienced it in other things, you know, just experiments I've done with larger models, larger context windows and that kind of thing, is: can it remember some arbitrary instruction from somewhere in it when there's all this other information? And what's remarkable about it is its ability to make assessments, and you can ask it questions interactively after it's made its assessment, to go, well, why did you pick these horses? Like, what was it about them that made you think that they would win? And, you know, what else could happen? You know, do you think there are other possibilities? And it's, it's able to, to really make those assessments quite quickly and quite, um, you know, without missing anything. It doesn't forget things in the context window. I mean, I know that's obvious and I know that's what it's meant to do.

(00:27:18):
I shouldn't be surprised by it. But when you're actually doing it, it just, it just has this feeling like, wow, that is really, really remarkable. And it, it keeps bringing me back to: we're not quite at the level where we know prescriptively, or scientifically I suppose, how to get the prompts to do a certain thing. Like, there's a certain incantation to it, you've gotta work with it and get it right. Small things, like for example that bias we just spoke about, can throw it off. Whereas if you can sort of massage the data you're giving it, um, it, it gives really good results.

Michael Sharkey (00:27:54):
But it does speak to the need to tune the AI, right? Like, uh, I think one thing it exposed for me is that these large language models are so powerful at becoming almost domain experts, in the fact that it can become essentially this expert in horse racing when given the right prompts and context and, and manoeuvred the right way.

Chris Sharkey (00:28:18):
I mean, that day we had a win in every race except for one, yeah, that we bet on. And I think it was about six in a row and only one lost, and it came second, which was the one you, you quoted the chat transcript from. And it's, and it's only our feeble human brains who get greedy and whatever, and it's like, well, I'm not just gonna bet on one, I'm gonna bet on four. Like, if you just simply let it pick its number one pick each time, it would be well and truly up, and you could automate that. And I think that it's very, very interesting because I think the next layer, and we're seeing this a lot in all the releases in the sort of AI community this week, is a lot of this instruction planning related stuff.

(00:28:59):
Everyone's realising that you're going to need supervisory agents that are making decisions on how to coordinate the worker agents in what they're doing. So, you know, to give a specific example here with the gambling one, it's not enough for us to just point it at a race and say, what do you reckon, and then we go off what it thinks and then go place the bets. You need another thing that then assesses that information and goes, okay, based on this model and its, you know, its history of success and weaknesses, I reckon we stay off this race, cuz I reckon it's not sure. Or, based on its history, I think, you know, we go for the, the first four, or we just go for the straight winner, because I think that's what we need. And I think that extra layer on top of it is what is needed. That extra bit of human thinking that we're applying to it, it's definitely capable of doing that, but I think we still feel that it's needed for us to make the final assessment there.

Michael Sharkey (00:29:50):
Yeah. So it's definitely still AI plus human. But what you are saying, and I think, you know, the entire AI community's discovering this at the same time, is that to have true agency, where you have, uh, an AI that can act as an agent and make decisions and manage the budget of the gambling to make sure you're always up and always place the right bet, at this point in the technology what we're talking about is multiple agents working together cohesively towards a single goal, as opposed to a single agent.

Chris Sharkey (00:30:25):
Yeah, exactly. And it could be the same model, it could be the same system underneath, but I guess different prompts, like we'd be talking about. Or, you know, we, we say agent, but it's really like a combination of models with a specific mission or objective. I think that's how I would describe it. And I think that I'm increasingly seeing things where people are having a planning agent that actually plans out, okay, these are the abilities you have, which ones am I going to use to accomplish this task, and how will I evaluate the results from each? Because the other thing we're definitely seeing is, like, imprecise results. Like, you can, uh, put the same data into the same thing and sometimes it'll just go well off track and be totally wrong, and other times it'll be incredibly accurate. So you need sort of this assessment that's like, okay, that was valid, that wasn't valid, and, and those kind of things. And I think the planning and assessment kind of agents working together brings us a lot closer to something we would consider intelligence, or at least giving us the benefits of intelligence, than just straight up thinking you've cracked the formula for something and you can just keep running it over and over again.
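
As a rough illustration of the supervisor-plus-worker pattern being described here (the role names, prompts and JSON shape below are our own invention, not any particular framework's API), one call produces the tip and a second call in a different role decides whether to act on it and how:

```python
# Sketch of a "supervisory agent" sitting above a worker agent. `llm` is any
# text-completion callable (e.g. a thin wrapper around the Claude client above);
# everything else here is illustrative, not a real library interface.
import json
from typing import Callable

def worker_pick(llm: Callable[[str], str], form_data: str) -> str:
    return llm(
        "You are a racing analyst. Using only this raw form data, pick the most "
        "likely first-four combination and explain your reasoning:\n" + form_data
    )

def supervisor_decide(llm: Callable[[str], str], tip: str,
                      history: list[dict], bankroll: float) -> dict:
    prompt = (
        f"You manage a betting bankroll of ${bankroll:.2f}. "
        f"Past results of the analyst: {json.dumps(history)}\n\n"
        f"Latest tip:\n{tip}\n\n"
        'Reply only with JSON: {"action": "skip" | "win_bet" | "first_four", '
        '"stake": <dollars>, "reason": "..."}'
    )
    return json.loads(llm(prompt))    # fragile in practice; real code should validate the reply

def run_race(llm: Callable[[str], str], form_data: str,
             history: list[dict], bankroll: float):
    tip = worker_pick(llm, form_data)
    decision = supervisor_decide(llm, tip, history, bankroll)
    if decision["action"] == "skip":
        return None                   # the supervisor decides the edge isn't there
    return {"tip": tip, **decision}   # hand off to whatever actually places the bet
```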

Michael Sharkey (00:31:31):
And this can be applied as well, pretty easily, to another form of gambling, the stock market, where, you know, it's essentially the same kind of concept, right? Like learning everything there is about a bunch of companies in a, in a potential market, having different agents accounting for where you should place the bets, whether you should short, um, all of those different combinations. This seems like a pretty good use case or, or a specialised product that,

Chris Sharkey (00:32:01):
Uh, look, I think so, and we've discussed this on previous podcasts, you know, that's, that's coming down the pipe inevitably, um, you know, as, as we go forward, because it, it makes total sense. Like, just this ability to take in vast quantities of information and, and make assessments, take into account historical information, being given historical examples of where it worked and where it didn't, it's absolutely going to work in that market. Of course, it's a sort of game theory thing, because at some point everybody's going to have access to these tools and they'll all be using them. So, uh, you know, that probably has to be taken into account as well. If you were the only one to have it, for example, you'd probably have a pretty significant advantage over other people, even just in the leverage of thinking. Like, you can as a human probably assess stocks better than, um, the AI can, maybe, potentially, especially if you've got experience. However, it can do it on all of them, you know, like it can literally go through every single one and go, you know what, this one you're looking at, I know you've spent a lot of time researching it, but there's this one over here that's actually slightly better, um, that you should look at, but you didn't have time to, to research it.

(00:33:08):
So I think it's, it's a case where it's just leverage, it's just thinking leverage. It's like, you know, people often say that about themselves when they're busy, like, oh, if only there could be 10 of me, if only there could be 20 of me, I'd get so much done, because I know what I'm doing in this space and whatever, but I have limited time. The AI doesn't have limited time, it has limited hardware maybe, or money, but essentially you can multiplex yourself and your way of thinking, uh, and even perhaps a better way of thinking, and get that leverage to get better results.

Michael Sharkey (00:33:40):
Yeah, I think that multiplex point is so true. Like, the, the world of agents is definitely coming, where we're spending most of our time curating and crafting agents as opposed to actually doing the tasks. Like, maybe the job replacement is just this form of agency of, of crafting these agents to perform tasks. It's gonna be interesting, though, to see what happens in the stock market. Like, do investors just become super informed? And, you know, in the betting world that we're dealing with, like these companies, you know, could they potentially go outta business? Like, if two idiots like us on a podcast release this to everyone, what would happen? Who and

Chris Sharkey (00:34:19):
Who's and who's to say we won't? Yeah. And

Michael Sharkey (00:34:21):
Who's to say we mightn't be working on releasing it in the next couple of weeks to everyone and then maybe these gambling companies might, you know, get a bit of a taste of what it's like to have the odds stacked against them. Yes.

Chris Sharkey (00:34:34):
Maybe. Yeah, exactly. I mean, horse, horse betting, if I understand correctly, is like a pari-mutuel market where you're really just betting against other people. I think the sports betting companies do fine no matter what happens, cuz they adjust their odds so they're getting equal money on different things. I'm no, I'm no expert on it. I think when you get to, like, sports betting and same-game multis and stuff, that's where they would probably really, really not like it. But, um, you know, it's, it's hard to say. And coincidentally, I've been running it on baseball as well, and I've got one going at the moment that seems to be going well. So, um, obviously I'll only report it if it wins. So, um, you'll, you'll hear about

Michael Sharkey (00:35:11):
It. Everyone will get access to this tool, and you just lose, like, so much money. It, it could be the best, uh, endorsement for addictive gambling. That, and I think,

Chris Sharkey (00:35:19):
I think this is the, this is the point I want to make: you're still ultimately making the decision of what to do with the information it gives you, excuse me. And I think that's where we need to advance it to the next level, which is where it makes the decisions as well, and you are really just going, okay, this is like my, um, what do they call it, high-frequency trading bot in the stock market, that you just algorithmically give a budget and allow it to go run an experiment for you. And I think that, you know, that's what we're trying to get to with this.

Michael Sharkey (00:35:48):
It's that whole, you know, the funniest outcome is the most likely, or something. Like, maybe what happens is we accidentally invent AGI through the gambling, uh, gambling project and also

Chris Sharkey (00:35:58):
Also give it a way to get itself resources to, uh, to empower itself to take over the world.

Michael Sharkey (00:36:04):
And the, the AGI event is just this drunken gambling AI, all it wants to do is bet on everything

Chris Sharkey (00:36:11):
. Yeah. Yeah. Just one more, just one more shot.

Michael Sharkey (00:36:14):
Anyways, we'll, we'll keep you updated on this project. We haven't mentioned it in a couple of weeks cause we haven't had that much exciting to share, but now it's working pretty reliably. Our next step is to start to gather some data. Like, is this an anomaly? Like, you know, are we just getting lucky?

Chris Sharkey (00:36:28):
Yeah, I think, I think that's the thing. You don't, you don't wanna get ahead of yourself and say, cuz we had, you know, a little string of wins there on one day or a couple of days' worth of wins, is that really saying that you have something? It could be a total fluke, you know. And remember, a lot of the time it was picking the favourite, so, you know, that isn't so profound. Anyone can bet on the favourite. It's just that whenever it picked the favourite, it won, you know. So, um, favourites don't always win. So it's interesting and I think, yeah, let's, let's keep reporting on it. And um, ultimately I'd love to find a way to sort of share the results in advance in a way that can't be manipulated, like blockchain style. So it's not just, like I'm admitting to, just reporting now that we're winning. You know, I think it'd be nice to be able to say, here, this is what I'm going to do, and here's what happened.

Michael Sharkey (00:37:14):
Yeah. I think having some, like, data collection around it, where people have to log their bets using it and then we store that over time, and we have a, a website maybe that shows statistically whether it does give you an edge or not, by having thousands of people potentially using it, would be

Chris Sharkey (00:37:31):
Super interesting. And I, I also think an agent that has memories, where you can punish it for its failures. It's like, you really screwed up on this one, mate. Like, you know, this, this has really affected my life. Don't get it wrong again, learn from this lesson. You know, like, look at what, look at all that, you've got all this information, you still got it wrong. You know, like, I really need you to do better. I think that kind of thing is really gonna help it. AIs love examples, and I reckon if we give it examples of its failures, then it'll, it'll improve.

Michael Sharkey (00:37:59):
Yeah. Again, not an endorsement for gambling. Gambling ruins so many people's lives, it's a horrible thing and I, I, I honestly deep down don't like it. It's

Chris Sharkey (00:38:06):
Like, and I'm definitely not saying that, like, the gambling is the only application for it. What I like about it is it's, it's a thing where there's a definite result. You know, it's like, this is, this is your thought, and you get an actual validation or invalidation of that thought. And I think that that's what's interesting to me about it, and it sort of pushes the model to, you know, what it's really capable of with reasoning and explaining its logic and all that. So I think

Michael Sharkey (00:38:32):
It's important to say why we originally settled on this. It's not only that, but also we were like, how can we use this technology to make insane amounts of money? Because if there's some sort of singularity, this is exactly what you would do if you had control of it, right? You would just go and create infinite wealth by exploiting every mechanism of this technology. And that's really how we landed on, on maybe trying this out. But

Chris Sharkey (00:38:57):
Yeah, and as we know, if you ask the AI what it would do to do that, it would, it would create a disinformation campaign, the likes of which the world has never seen . So this is a sort of muted version of that. I think, uh, it wouldn't bother with money. It would, it would just,

Michael Sharkey (00:39:13):
Uh, the, the final thing I'll say on it is it's the weirdest experience, because I never bet on horse racing, or rarely do. And to put in this, like, boxed first four combination by being told by the AI what to plug

Chris Sharkey (00:39:28):
In. It's kind of funny that, like, your second ever bet was, like, you got the first four horses in the race right. Yeah. Like, that's gotta raise some red flags over it, whatever.

Michael Sharkey (00:39:35):
So we're being banned from Anthropic, Sportsbet, uh, what else?

Chris Sharkey (00:39:41):
Yeah, exactly. That's right. That's why we need the podcasting.

Michael Sharkey (00:39:43):
If anyone from those companies is listening, please let this play out. It, it's gonna be fun. Uh, I, I assure you, it's good promotion, really. Uh, and, uh, you know, let's keep the good times rolling. Alright, uh, this is gonna end poorly. Uh, alright. I want to play a clip now, uh, from Sam Altman's world tour of self-importance, continuing this time as Sam Altman takes on India. Let's listen to the clip and then, and then we'll talk about it. You, you, sounds good.

Speaker 4 (00:40:15):
Sam, can you, uh, just go into, go into startups? I mean, as you know, we've got a very vibrant startup ecosystem in India, um, specifically focused on AI. Are there spaces where you see, let's say, a startup from India building, you know, you can build on the models, you know, be it, uh, ChatGPT and many others, but if you want to build foundational models, how should we think about that? Where is it that a team from India, you know, three super smart engineers with, you know, not a hundred million, but let's say 10 million, could actually build something truly substantial?

Speaker 5 (00:40:43):
Look, the way this works is we're gonna tell you it's totally hopeless to compete with us on training foundation models. You shouldn't try, and it's your job to, like, try anyway. And I believe both of those things. I, I think it, I think it is pretty hopeless, but

Michael Sharkey (00:40:59):
So he says, I think it is pretty hopeless. My first interpretation of that clip was that it was somewhat sarcastic. Like, hahaha, we're really good, haha, but you should still try anyway. But it seems like no one's really interpreted it that way, and it, it's almost come across as this arrogance, like, we're so far ahead, we're the best at this, don't even try, you'll never catch up, India. And I feel like India came on like...

Chris Sharkey (00:41:30):
Yeah. And I think India's a country that could do it. That's the thing. Um, yeah, I, I think it's one of those things where, you know, someone says it like, oh yeah, no, I'm just being sarcastic and joking, but I reckon he really thinks that. I think they're like, this is our thing, please don't do our thing. Like, you know, you can use our stuff, and that's why I'm doing my world tour, to encourage you to use our products, but, you know, this is off limits. Only we can do the bedrock... sorry, I shouldn't use Bedrock. Only we can do the, what do they call 'em, base models or whatever. Like, that's our territory, we own all the hardware, you know, we're in the pocket with the government now, um, you know, and we're gonna block anyone else from building them.

Michael Sharkey (00:42:10):
Yeah, it seemed really arrogant to me, and the fact he originally set out on this tour to inspire people and learn about how people were using the technology, but if you don't fit into their sort of narrow lane of we're the rock stars of this world, no one else can compete, we're the, we're the number one, then, you know, maybe that was his true colours shining through. It's really, I've said it many times, it's really hard to like this guy. I'm sure maybe in person he's really charming and, and

Chris Sharkey (00:42:40):
Like, why say it? If you think you've got this, this sort of unassailable position where no one can do it, and first of all, what company in history has ever been in that position and kept it, um, if you think you've got this position, why crush the hopes and dreams of these people who are really interested in the same thing you are and genuinely want to participate in it? It just, it just seems wrong.

Michael Sharkey (00:43:02):
Yeah. I really hope we're talking in a couple of years, maybe a year from now, and saying India's released, like, IndiaGPT and it's like a billion times better.

Chris Sharkey (00:43:12):
They've got the highest population in the world. They're really into tech,

Michael Sharkey (00:43:15):
Insanely smart people.

Chris Sharkey (00:43:18):
Yeah. I just, I just don't see why you would write off an entire country and just be like, guys don't bother. Yeah.

Michael Sharkey (00:43:24):
I don't know. It could have been sarcasm, though, and he's just tired on his tour. I'm trying to give him the benefit of the doubt here. Yeah. And say that maybe. But it's definitely making waves. It's, it's trending right now on, on Twitter as we speak, and, uh, a lot of people are playing that clip, so probably not the PR that he wanted. Interestingly, last week we covered an article about the roadmap for GPT-4; that's on the previous episode if you want to go back and find it. It's in the chapter index, um, on, on the podcast and on, on YouTube if you wanna find it. But interestingly, I had to go back and find that article on, on the Wayback Machine, because the blog, uh, outlining the details from this talk had been deleted. I later found out on Twitter, and I said I'd investigate it on last week's show, that it's alleged that OpenAI asked them to delete it, because I think that they didn't want that information out there, but obviously they just drew a lot more attention to it. And that was where they were talking about GPU, uh, availability.

(00:44:22):
And so it, it does sound like one of the problems, and we know this, is just access right now to the hardware in order to train these models. And, and we know there's a hardware shortage because he said it, and you can see that through NVIDIA's stock.

Chris Sharkey (00:44:39):
Nvidia. Yes. Did

Michael Sharkey (00:44:39):
You like that ?

Chris Sharkey (00:44:41):
That's good. I prefer it pronounced that way. Yeah. Um, yeah, and I, I don't know. Like, I guess I haven't been involved directly in trying to train a massive model, and, like, you know, they cost a lot of money, and we've definitely seen the models that can be trained on smaller amounts of hardware and less time that give, you know, 80% of the results and things like that. The question is, how important is it to always have the absolute best one when we don't even know what you can get out of them? And also, you know, if you can train one that's 80% of GPT-4, but you don't have the restrictions that they perhaps apply in terms of, you know, API cost, in terms of censorship, in terms of the things we spoke about last week with the alignment, where, you know, this isn't aligned to the problem you're trying to solve, it's aligned to being a chatbot that appeals to the masses, you know, maybe it isn't necessary to, to have the largest one with the most parameters. That might not be the best thing. So I think there is a certain arrogance there, thinking, oh, well, it's only if you control all of the hardware and you're the only one to, to do the biggest thing that you have the absolute advantage. Because I don't think all of the advantages of this AI revolution come from just being the best model at the current moment.

Michael Sharkey (00:45:57):
Yeah. And the question is, will we see a breakthrough where, just like the human brain, you know, you said this last week, you don't need to be trained on a million dog images to understand what a dog looks like. Mm-hmm. Maybe there's a different way of doing this where the models can learn from much less.

Chris Sharkey (00:46:14):
Well, and I think that's what we're seeing in the community this week in particular, and even in our own experimentation: this idea of agency and this idea of, of planning and, you know, the chain-of-thought reasoning. You know, we're seeing more papers come out about how to best apply chain-of-thought reasoning to get the best results from your AI models. And it's certainly the case, and you see a great example because it's visual, with things like Stable Diffusion and Midjourney and DALL-E: people who can craft really good prompts for them get miles better results than just casually doing it. Or if you use a large language model to create the prompts, for example, which I've experimented with, it creates so much better prompts for these things. So there's certainly work that needs to be done on applying the technology, more than just making a bigger version of the thing.

(00:47:05):
You know, it'd be like creating the first industrial printing press but not really giving a shit about what you're printing in the newspapers, or the books, or whatever it is. Um, you know, there's, there's domains within the technology that need to be explored extensively, that they're not going to be able to compete on. And those domains will exist whether or not you have the biggest model. So it doesn't just have to happen in the context of OpenAI. There's plenty of other ways that those domains can be explored, if you know what I mean.
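
As a small illustration of the prompt-expansion trick Chris mentions here, using a language model to write the image prompt for you, a sketch might look like the following. The helper name and the instruction wording are ours, and `llm` is assumed to be any text-completion callable rather than a specific library.

```python
# Sketch: ask a language model to expand a plain idea into a detailed image prompt.
from typing import Callable

def expand_image_prompt(llm: Callable[[str], str], idea: str) -> str:
    instruction = (
        "Rewrite the following idea as a single, detailed prompt for an image model "
        "such as Stable Diffusion. Describe the subject, setting, lighting, camera "
        "angle, art style and mood in one comma-separated line.\n\nIdea: " + idea
    )
    return llm(instruction).strip()

# Hypothetical usage:
# detailed = expand_image_prompt(claude_complete, "a 1990s computer lab, holodeck style")
```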

Michael Sharkey (00:47:34):
Yeah. I, I believe all of this, but the thing I'm still struggling with a little bit is to compare this to the early era of search, when you had a number of players like AltaVista and, you know,

Chris Sharkey (00:47:47):
Uh, what was it, Dogpile? Was that one

Michael Sharkey (00:47:49):
Of 'em? Yeah, there was a, a tonne of search engines, right, and then Google was just such a clear winner because it had the best results and a minimal UI and blah blah blah. But it always stood out and it, it's consistently been better and then just became the winner. And I just can't help but think, like, you know, Google's PaLM 2 API, it seems like you can now get access to it. I haven't tried it, but from

Chris Sharkey (00:48:15):
What I've tried to get access and failed. Yeah. Yeah.

Michael Sharkey (00:48:18):
And then from what I've heard on that, though, like this tweet, uh, I'll pull up and link to in the show notes: "I tested out Google's PaLM API. Pros: it's way faster than ChatGPT, almost three times as fast. Cons: response quality is awful in comparison." And then you've got Amazon Bedrock, which, like, no one can see, no one has access to seemingly, and I think it's still a bit of vaporware or, or terrible, which is why they haven't released it. So it's sort of like, on one hand we're saying the competition's coming, but we're all still mostly using, well, OpenAI, because it's available.

Chris Sharkey (00:48:55):
Well, you know, I sing the praises of Claude a lot, because we have access and I've used it for things and I really like it. But I've noticed, even in our own comments on YouTube, that people who want to try it, perhaps because they've heard us talk about it or just in general, still can't get access. And we've speculated that that's probably because of, um, like, a lack of hardware, that they can't scale it up, for example. That's the only reason I can think of. Um, so yeah, I think that's the thing. It comes back to that real-world thing where, you know, in theory competition exists, but in practice, does it? Like, are people really building? The first commercial open source model we saw was Falcon, which has only been out for a week or so, and which you need at least an A100 to run. Um, you know, is anyone building commercially on that, on a large scale? So, yeah, I guess at this stage, most of what we're seeing, like, people adding AI to their applications, it's all OpenAI, you'd have to think.
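For a sense of what "building on Falcon" actually involves, here is a rough sketch of loading the tiiuae/falcon-40b checkpoint with Hugging Face transformers, roughly as its model card described at the time. The exact arguments and generation settings are our assumption for illustration, and the full-precision weights are what push the hardware requirement towards an A100-class GPU.

```python
# Rough sketch of running Falcon-40B locally with Hugging Face transformers.
# Assumes transformers, accelerate and a recent PyTorch are installed; the
# generation settings are illustrative, not values from the episode.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tiiuae/falcon-40b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # bf16 weights still want ~80 GB of GPU memory
    trust_remote_code=True,       # Falcon shipped custom modelling code in 2023
    device_map="auto",            # spread layers across whatever GPUs exist
)

prompt = "The three biggest open questions in AI are"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=60, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```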

Michael Sharkey (00:49:59):
Yeah. And it sort of parallels this Apple Vision Pro thing, where everyone's like, oh, you know, the first iPhone, it was expensive, not very accessible, they had to ramp up manufacturing. Is that similar for OpenAI, where it's like, these guys have ramped up, it's readily available, it's operating at huge scale, and then everyone else is trying to figure it out? They're like the iPhone, and the Android phones are coming. Maybe that's a better way to think about it. Yeah. That,

Chris Sharkey (00:50:31):
That is a good way to, that's exactly how I think about it. I think inevitably hardware catches up. It might take a little bit of time. There's also the thing I said where, you know, it's like we're squeezing lemons, but we're just doing it with a little teaspoon or something, and we don't have a juicer available. You know, I think there's more to be got, can I say that, more to be gotten? There's more to be got, whatever. I need the AI to correct me. We've

Michael Sharkey (00:50:55):
Said so much stupid stuff on this show, I'm sure. Like whatever, Yeah,

Chris Sharkey (00:50:59):
Exactly. When the AI is transcribing it, it can figure that out for me. But, um, you know what I mean? I think there's more, I can't say it, that we can get more out of the existing models, like, you know, at lower hardware limits. And I think that, you know, limits inspire creativity. If you've got limits on what you can do, you will get more out of what you've got. That's why there are people out there still making games for the Game Boy and Game Boy Advance, right? And they make these amazing games, getting more out of the same hardware they had back then. And I think constraints lead to, you know, good creativity and really interesting things. So I don't think we need to be constantly sitting around waiting for the next, like, GPT-5 or whatever it is.

(00:51:45):
I think we need to take some time and explore what's here and what it's capable of. And that's what I see. Like, when you look on Reddit, Twitter, Hacker News, um, Hugging Face and all the different platforms where people are releasing things, there are some really, really interesting applications of the technology. And you're seeing different industries like education and medicine mobilising and starting to talk about AI. I read an article in the, um, what is it, the New England Journal of Medicine this week about, you know, renewed interest in applying AI to medical imaging, for example. And that's on the back of papers we've spoken about on here before. So I think you're starting to see AI become pervasive in different industries where, you know, there have been waves of AI before, but there hasn't been this depth of interest and sustained interest that we have now in a long time. And so I think that's what I'm more excited about. And we've spoken about this before: there's so much good that can come from this technology in the short term, before we get to our doomsday scenarios, and that can be done with existing technology. So I'm excited to watch from the sidelines, and maybe be involved in a bit of what comes of that.

Michael Sharkey (00:52:58):
I agree. We've said this, that it felt like there was this shift where everyone was very overwhelmed, and then everyone sort of got comfortable with that state of, you know, just constant answers. And then it's like, okay, now let's go build and see what we can do with this thing. And we talked, and there are a lot of papers this week, about chain-of-thought reasoning and how to get this thing to reason better, stay on track, not lose track of the objective of what it's trying to do, and evaluate its own thoughts and the decisions it's making. It does seem like, by having these limits, you've got to overcome challenges like this, which may actually improve the state of AI in general long term, outside of just the obvious challenges like a GPU shortage.
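As a concrete illustration of the chain-of-thought and self-evaluation pattern Michael is describing, here is a minimal sketch. The call_llm function is a hypothetical stand-in for whichever model API you use, and the prompts and single critique pass are our illustration of the idea rather than a method from any particular paper mentioned here.

```python
# Minimal sketch of chain-of-thought prompting plus one self-evaluation pass.
# `call_llm` is a hypothetical stand-in for any chat/completions API.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this up to your model of choice")

def answer_with_reflection(question: str) -> str:
    # Step 1: ask the model to reason step by step before answering.
    draft = call_llm(
        f"Question: {question}\n"
        "Think through the problem step by step, then give a final answer "
        "on a line starting with 'ANSWER:'."
    )
    # Step 2: have the model critique its own reasoning against the goal.
    critique = call_llm(
        f"Question: {question}\nDraft reasoning and answer:\n{draft}\n"
        "List any mistakes, or places where the reasoning drifted off the goal."
    )
    # Step 3: revise once, keeping the objective in view.
    return call_llm(
        f"Question: {question}\nDraft:\n{draft}\nCritique:\n{critique}\n"
        "Write a corrected final answer on a line starting with 'ANSWER:'."
    )
```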

Chris Sharkey (00:53:48):
Yeah. And it brings along all the technology around it. Like, one thing I've noticed in the open source community is that it's getting increasingly easy to try things. You know, there's Oobabooga, I think it's called, which is, like, a text interface where you can select different models and run them. Then there's this site I found the other day called RunPod, where you can just spin up a GPU instance for a few minutes to try something out, with pre-made templates for models and things like that. So it's gone from being this thing where, oh, I want to get Stable Diffusion running, that's gonna take me a whole day and, you know, I need the right hardware and all that, to being able to try things much more quickly. And then, you know, someone else has released open source code that'll allow your agents to plan out what they're going to do, for example. So there's a lot of tooling and really nice stuff out there that makes this more accessible. And I say accessible as someone who has the technology skills to do it. But I guess what I'm saying is that, um, there will be time for all of that stuff to develop so more people can have access and apply it in their respective industries.
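The "agents that plan out what they're going to do" idea Chris mentions is easy to sketch without naming a particular open source project: ask the model for a numbered plan first, then feed each step back one at a time. Again, call_llm is a hypothetical stand-in, and the plan parsing here is deliberately naive.

```python
# Naive plan-then-execute loop, in the spirit of the agent tooling discussed.
# `call_llm` is a hypothetical stand-in for your model API of choice.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this up to your model of choice")

def run_agent(goal: str) -> list[str]:
    # Ask for an explicit plan up front instead of improvising step by step.
    plan = call_llm(
        f"Goal: {goal}\n"
        "Write a short numbered plan (3-6 steps) to achieve this goal."
    )
    steps = [line.split(".", 1)[1].strip()
             for line in plan.splitlines()
             if line.strip()[:1].isdigit() and "." in line]
    results = []
    for step in steps:
        # Each step sees the goal, the plan and earlier results, which helps
        # the agent stay on track rather than losing sight of the objective.
        results.append(call_llm(
            f"Goal: {goal}\nPlan:\n{plan}\n"
            f"Results so far:\n{chr(10).join(results)}\n"
            f"Carry out this step and report the outcome: {step}"
        ))
    return results
```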

Michael Sharkey (00:54:54):
Yeah. This idea, though, that all developers are doomed and there won't be, you know, engineers in the future and they'll all be replaced. I mean, I can't speak for the entire future, but at least in the next decade, I think there's just gonna be so many opportunities in wrangling this technology, trying to make it work, and, you know, being that human-plus-AI in this development process of how we get the maximum benefit of AI in society. And I think having all these open source contributors and tools is so important to the entire ecosystem, to get this technology everywhere we want it to be, and into the areas of society we know we need it, like medical breakthroughs and all these other, um, phenomenal areas. But this week we also saw something that did scare me, back on the sort of regulation front, which is "Senators send letter questioning Mark Zuckerberg over Meta's LLaMA leak". And so,

Chris Sharkey (00:55:57):
Imagine explaining that to someone in the 1980s.

Michael Sharkey (00:55:59):
Like, yeah, it just, as a title, is so strange,

Chris Sharkey (00:56:03):
Maybe even now, like, if you weren't into this stuff, that sounds like a bunch of nonsense,

Michael Sharkey (00:56:07):
It said: we are writing to request information on how your company assessed the risk of releasing LLaMA, what steps you are taking to prevent the abuse of the model, and how you are updating your policies and practices based on its unrestrained availability. It's

Chris Sharkey (00:56:21):
Abuse, it's abuse because it isn't OpenAI. Otherwise it would be fine. How are they abusing it? By using it?

Michael Sharkey (00:56:28):
It seems to me that the fear here is maybe that other companies will use it for disinformation campaigns, how it leaked onto BitTorrent, the weights leaked. And maybe what they're getting at is, if more powerful models are developed that are a security risk to the United States and they just leak, what that could mean for their overall security. Or it's just, like, you know, as we suggested prior to recording this episode, Sam Altman has regulators in his back pocket now, and he's like, you know, let's make Facebook the poster child of all the bad stuff of this movement.

Chris Sharkey (00:57:10):
Yeah. Like, I mean, that's one thing, and I suppose you're right, they must be thinking, I hope they don't do it again, because it's too late to get it back. Like, you can't stop it now. Um, but, you know, allaying my fears from our very early podcasts, where I was like, download everything you can because it'll be taken away, that fear of mine is gradually going away, because they're not getting rid of those weights now, like, that's out there everywhere. Um, but yeah, so they must be fearing that Facebook's relative openness in the AI space is obviously a threat to someone, and they're trying to stop them from continuing down that path, which is a shame, because what they've done has really spurred on the entire open source community. LLaMA was the real genesis of it. I mean, look, I could be ignorant to that, I'm sure there was other stuff going on, but it's where it really caught the attention of mainstream people like me.

Michael Sharkey (00:58:06):
Yeah. It is still the most experimented-on model, from what I can ascertain from Hugging Face as well. You know, it's being used quite a lot still. So I think a lot of it, yeah, and it's

Chris Sharkey (00:58:19):
The foundation of other models, like, people train it further and things like that. So it's not just the raw LLaMA thing. It's like people are, you know, aligning it and doing all the other stuff they do to it,

Michael Sharkey (00:58:30):
Yeah. And all the papers that are coming out of Meta, and how they've got this open source approach. I actually don't think that should be restricted. I think it should be encouraged, because the more pervasive these technologies are and the more everyone has access, the less likely it is to be a risk longer term if everyone's got the same tools. But in the short term, I just can't really see how it's that big of a risk right now. Uh, I mean, misinformation is already a weapon, it has been throughout history. Yeah, sure, this will amplify it and make it a bit scarier, but you've still gotta mitigate those risks anyway. Yeah.

Chris Sharkey (00:59:08):
It's funny, 'cause I shared with you that at my kids' school, the principal sent a newsletter around talking about ChatGPT in education, and he was quoting Plato, you know, saying that the invention of the alphabet and people writing down stories would cause everyone to not remember knowledge and not bother to remember stuff anymore, because it's written down. And, you know, people now are worrying that, oh, well, with Wikipedia and ChatGPT, no one will need to learn anything anymore. But that's not how those things played out. You know, written language probably led to people getting more knowledge, because it could be more easily shared. It didn't require someone telling them it, and memorisation, and those kinds of things. So, you know, I think it's adaptable, that kind of thing. And it probably is maybe not good, but it's not necessarily bad either.

Michael Sharkey (01:00:01):
So there was a video that went around on Twitter this week, which was from the BBC. It was a prediction about AI in the future, made in 1964. I won't play the whole thing, but I thought it was interesting to play for everyone. Yeah.

Speaker 1 (01:00:19):
The most intelligent inhabitants of that future world won't be men or monkeys. They'll be machines, the remote descendants of today's computers. Now, the present day electronic brains are complete morons, but this will not be true in another generation. They will start to think, and eventually they will completely outthink their makers. Is this depressing? I don't see why it should be. We superseded the Cro-Magnon and Neanderthal men, and we presume we are an improvement. I think we should regard it as a privilege to be stepping stones to higher things.

Michael Sharkey (01:01:00):
So, a stepping stone to higher things. He sounds like Larry Page with his speciesist comment, the one he made to Elon Musk, saying that we're merely a stepping stone for a higher intelligence in evolution, and comparing it to, you know, when we evolved beyond the Neanderthal. So, do you think he's right?

Chris Sharkey (01:01:28):
Yeah. I mean, it's surprisingly interesting that someone could have that level of thought back then. I mean, it makes sense: once you get electric circuits, you think, oh, well, this could eventually be like a brain. But, you know, some of the comments in there, observing that the early stages of the technology were, like, stupid, and that eventually and inevitably it could reach a higher level of intelligence, I mean, that kind of makes sense to me. And I think some people are thinking like that now. And I think it's natural why some people, you know, fear it, because they were thinking that back then, and it's now 90, well, how many years later? When was it, the sixties or something?

Michael Sharkey (01:02:04):
Yeah, 60 years ago.

Chris Sharkey (01:02:05):
Yeah, it's like 90 years later, uh, 60 years ago, sorry. Yeah. And, um, so now we're just at another point like that, where some people can have these clear visions of the future, or a future where we'd be subjugated to the computers.

Michael Sharkey (01:02:22):
Yeah. Everything we're talking about in these papers, trying to model human thought, build different modalities. No matter how you look at it, you can't help but think we're just witnessing us trying to rebuild a human brain that's highly evolved. But I think what stood out to me about that clip is that later in it, which I didn't play, he talks about how, you know, one day, just like we can record data onto a cassette tape recorder or whatever was around then. Yeah, yeah. He says maybe we could do that with our brains one day and just instantly

Chris Sharkey (01:02:57):
Like in The Matrix, yeah, the way you load the skills in.

Michael Sharkey (01:03:01):
Instantly learn a new language, like Chinese, he alludes to in the clip. And the closest parallel I could relate it to was Neuralink, which, uh, has now gone into human trials. So

Chris Sharkey (01:03:14):
The only thing I disagree with on that, though, is that even AI models, you know, need to be trained systematically to adjust the weights in their brain. So it's not like you could just load a pattern of neurons into your brain; your brain would still need some kind of training. But maybe it could do it with certain thought patterns that lead to you learning that skill or gaining that knowledge, for example. And I wouldn't have even considered that idea if not for some of the papers you've shared with me in previous podcasts, around actually mapping human thoughts and then drawing a picture of the cat or whatever it was they were visualising. It makes me think, well, you know, if you can find out the format of thoughts and play all the thoughts in a certain order, perhaps you can force someone to learn, at least knowledge, if not a skill.

Michael Sharkey (01:04:01):
Soon we'll be being shown a million images of dogs in our brains.

Chris Sharkey (01:04:06):
Yeah, just like with Neuralink. I'm not gonna go first, is the only thing. Like, maybe, I don't know what skill I would want badly enough that I'd subject myself to that. Not really sure. You

Michael Sharkey (01:04:16):
Said, when I played you this clip the first time, it reminded you of a book. Well,

Chris Sharkey (01:04:22):
Yeah. So I'm reading a George Orwell book called Coming Up for Air. And, um, I got it at a secondhand bookstore when I'd had too many wines after dinner or something, and I never read it, and I picked it up the other day and started reading it. And early on there's this quote that, in my mind, very much relates to some people's fear about what's coming with AI. To give some context on this book: it was written and published in the late 1930s. So, uh, George Orwell had been through World War I, and World War II, well, some people could see it coming. They saw the threat that Hitler posed. Winston Churchill was one of those people. He wasn't in government at the time, and he was one of the people who saw the threat of Hitler when everyone else was putting him on Time Magazine and saying what a great bloke he is.

(01:05:08):
And, um, so George Orwell was clearly someone who saw what was coming with the war. So I'll read you this quote. I've abridged it slightly just for time, but I just want you to think about this, and think about how some people feel about AI coming and what impacts it might have on the world. He says: I was walking westward up the Strand, enough noise to waken the dead, but not to waken this lot, I thought. I felt as if I was the only person awake in a city of sleepwalkers. And this kind of prophetic feeling that keeps coming over me nowadays, the feeling that war's just around the corner and that war's the end of all things, isn't peculiar to me. We've all got it, more or less. Whatever you think, there's always a million people thinking it at the same moment.

(01:05:49):
That was how I felt: we're all on this burning deck and nobody knows it except me. Like turkeys in November, I thought, not a notion of what's coming to them. It was as if I'd got X-rays in my eyes and could see the skeletons walking. And then he goes on from there, talking about how he visualises London being war-torn, with the bombs coming overhead, the fighter planes, you know, which actually did happen to London with the Blitz and all that. And he talks in vivid detail about what he visualises happening. And as I was reading it, I couldn't help but think that when you have the more extreme thoughts about what AI could become, I could see how people could definitely visualise a future in which, you know, it's bad because of what happens with AI. Um, and so, yeah, look, I don't know what to make of that. It just really stood out to me as being sympathetic to people who have the more extreme beliefs when it comes to, you know, what this technology could lead to.

Michael Sharkey (01:06:46):
The future seemed obvious to this individual in 1964, and that future seemed logical. You know, written in that book by George Orwell before World War II, there are a lot of predictions about where things might lead. There are all the different statements about AI safety. Do you think it's a case where everyone knows the direction we're heading, and, just like then, we all have that deep sinking feeling that maybe that's the direction longer term, and I'm not talking short term, but longer term, and that's what this uncomfortable feeling is, the uncertainty of it all? But

Chris Sharkey (01:07:27):
I guess the point is, it's like, you know, if George Orwell had made his opinions known and everyone had listened to him, what could they have done to prevent it anyway? And it's similar with the guy predicting, you know, computers becoming smarter than us. If he'd wanted to stop that somehow, what could he have done? You know, like, killed John von Neumann or something? Like, there's a certain inevitability to this stuff. And I think somewhat that's what Orwell's saying there: he sees this future, but he's not going to do anything to try to prevent it, because he can't. And so, like, I don't know, I just don't think the efforts of governments around safety are anything more than political machinations and them, you know, trying to protect their interests. I just don't see anyone who's actually trying to genuinely stop this because they think it's an existential threat. But also, I'm not good at politics. I don't really know.

Michael Sharkey (01:08:22):
I don't think it's gonna stop. And, like I always say, I don't know what we're witnessing at this time in history, but it's gonna be very, very interesting to see. We've got the Apple Vision Pro with this augmented world. We've got AI, you know, still,

Chris Sharkey (01:08:41):
I mean, it's so, like, I know I've brought it up before, but the series of books, Perry Rhodan, where they discovered the alien race on the moon, and they're all sitting in their personal entertainment devices like zombies, their spaceship just literally parked on the moon, 'cause they're all so entertained by these devices. We're getting awfully close to that, where people are just sitting around with these things on their heads, watching movies or pretending to work or whatever the hell they're doing. Like, it's actually happening. It's kind of weird. It

Michael Sharkey (01:09:09):
Could be that like you come to earth in the future and everyone does have their headset on. It's

Chris Sharkey (01:09:14):
Laying around.

Michael Sharkey (01:09:14):
It's like the, you know, TikTok now when people are on that thing, they're like zombies. It's, it's like a hundred times worse, uh, with these AI headsets. And then there's robots like attending to their bedsores and stuff because we conquered AI just to keep the entertainment rolling.

Chris Sharkey (01:09:28):
I mean, it's not that far-fetched. That's what's so crazy about it. Like, they've got the tech, you just need the robots next to it to tend to the bedsores or whatever it is, and we're there. Oh man.

Michael Sharkey (01:09:40):
Oh, it's gonna be interesting to, to see how this unfolds.

Chris Sharkey (01:09:44):
Get your orders from the government through your Neuralink and you're away.

Michael Sharkey (01:09:47):
What orders? I think everyone will just be sitting there like, "continue to relax," headsets on even. There'll be no government, and it'll just be robots looking after us.

Chris Sharkey (01:09:55):
Yeah, exactly. I mean, like, yeah, it's kind of weird. I mean, I'll still try one.

Michael Sharkey (01:10:00):
Yeah, I'll definitely try it, of course. Yeah. Alright, that'll do us for this episode. We heard you loud and clear about getting more content out there. We've talked about it. Uh, we have some ideas about slotting in some additional, very focused topics for you to listen to. Um, some ideas around when we eventually get this gambling, uh, AI working. Well, we think we're gonna record an episode, a day at the races, to prove to you that it works, and record that live for you. So you can look forward to a little bit more content from us, hopefully coming soon. But don't hold us to it. We're very busy with our day jobs.

Chris Sharkey (01:10:38):
That's right. We reserve the right to be massive hypocrites. Yeah. At

Michael Sharkey (01:10:41):
All times. Very hypocritical podcast, this show. Uh, if you liked this episode and you're not already subscribed, please consider subscribing, leaving us a comment to help spread the word, and doing all that stuff I remind you to do at the end of every episode each week. We'll see you next week. Goodbye.