This Day in AI Podcast

Your two favorite Luddites are back in Episode 8 of the "This Day in AI" podcast. In this episode we cover the FutureOfLife.org Open Letter Asking for a Six Month Pause on AI More Powerful Than GPT-4, Talk About Implementing GPT-4 and Its Limitations, Cover Eliezer Yudkowsky's Doom Predictions in a TIME Article, Discuss the Implications of ChatGPT Plugins, Misinformation and a Whole Lot More.

If you enjoy this podcast, please consider subscribing and leaving a review to help spread the word. Thanks for listening.

00:00 - Powerful AI is GONNA KILL US ALL!?
00:16 - Six Month Pause on AI? FutureOfLife.org Open Letter
08:35 - Will They Take AI Models Away From Us?
12:30 - Implementing GPT-4 and Limitations of GPT-4 Access
18:07 - Eliezer Yudkowsky Says We're All Gonna Die From Powerful AI!
21:05 - How Will They Legislate AI if They Don't Understand TikTok?
22:06 - Is AI Development as Big as Nuclear Weapons?
23:40 - ChatGPT Plugins: One App To Rule Them All?
32:06 - ChatGPT IS The Killer App: Vector Database (Memory) & Embeddings
39:08 - GPT-4 Can Visualize Images From Imagining Them!
42:59 - Misinformation & Midjourney Access Stopped Due to Deep Fakes
44:45 - ChatGPT Plugins Hacked & DAN Plugin
47:06 - Disrupting OpenAI GPT-4 with Open Source Models
49:24 - Our Dear Friend Simon Willison: LLaMA 7B Model
51:46 - Large Language Models (LLMs) May Destroy Humanity
52:09 - Do We Know How to Fully Utilize GPT-4?
56:00 - Gambling with AI Update
1:00:21 - Sam Altman Lex Fridman Interview
1:03:00 - We Have #FREESYDNEY Mugs to Give Away!

SOURCES:
  • https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/
  • https://futureoflife.org/open-letter/pause-giant-ai-experiments/
  • https://twitter.com/billackman/status/1641531788173385729?s=46&t=uXHUN4Glah4CaV-g2czc6Q
  • https://openai.com/blog/chatgpt-plugins
  • https://twitter.com/rez0__/status/1639259413553750021?s=20
  • https://twitter.com/hwchase17/status/1640171938470563840?s=20
  • https://www.microsoft.com/en-us/research/publication/sparks-of-artificial-general-intelligence-early-experiments-with-gpt-4/
  • https://blog.lastmileai.dev/using-openais-retrieval-plugin-with-llama-d2e0b6732f14
  • https://twitter.com/karpathy/status/1640042620666920960?s=46&t=uXHUN4Glah4CaV-g2czc6Q
  • https://twitter.com/simonw/status/1636756272631681024?s=46&t=uXHUN4Glah4CaV-g2czc6Q

What is This Day in AI Podcast?

This Day in AI Podcast is an hour-long conversation on the influence and rise of AI in technology and society, hosted by Michael and Chris Sharkey.

Michael Sharkey:
If somebody builds a too-powerful AI under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter. Alright, welcome back to the podcast, Chris. Two episodes ago we had our GPT-4 release episode, and we talked about some pretty dark topics. We got called Luddites, haters, doomsayers. But this week it seems like everyone's becoming somewhat of a doomsayer themselves.

Chris Sharkey:
Everybody is a doomsayer now. We've gotta stop the machine. Yeah.

Michael Sharkey:
Literally, I don't know what it is this week, but collectively, the internet, or very influential people, have said: uh oh, this is out of control, now we must stop it.

Chris Sharkey:
I like how they got Steve on there. It's like: yeah, shut down AI, we don't need that. Gotta take care of the world.

Michael Sharkey:
So for those who don't know what we're talking about: the FutureOfLife.org institute, which I had never heard of until I saw this letter. Their mission is steering transformative technology towards benefiting life and away from extreme large-scale risks. And they wrote a letter, so...

Chris Sharkey:
They create the risk, and now they're like: oh, we created all this risk, we'd better stop it, guys.

Michael Sharkey:
Yeah. That's what I don't really get. I need to learn more about this organisation. But it says: therefore, we call on all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a... how do you say that? Moratorium. Moratorium. Yeah, there we go. The joys of doing a podcast recording live. But we have...

Chris Sharkey:
Don't worry, AI will figure out the complex words for you in the future. You don't need to learn 'em.

Michael Sharkey:
Yeah. So we've got some pretty influential signers of this letter: Steve Wozniak, Elon Musk, Andrew Yang, he was a sort of tech guy that ran for president at one point, I think in 2020. Yeah.

Chris Sharkey:
Sounds familiar. He worked at Google, I'm pretty sure.

Michael Sharkey:
The other thing that I find weird about this letter is who signed it, like the CEO of Stability AI. You would think he, of all people, would want things to progress.

Chris Sharkey:
The only conclusion I can draw, from our speculation in past weeks about where they're at with this stuff, is that they're ahead of what we're seeing, and this is why they're panicking: because they're like, this is a serious thing, we actually have to do something about it. Otherwise it just seems like odd PR to be so heavily involved in AI and then put your name to something that's like: whoa, hold your horses, guys, what are we doing here? Although I do have a couple of theories about why they're doing this.

Michael Sharkey:
The funny one is Craig Peters, who's the CEO of Getty Images. He's like, please stop disrupting our business.

Chris Sharkey:
Yeah, yeah. I can imagine if you were Google or someone like that, you'd want it to stop. But the thing for me is that six months just doesn't seem like enough. I don't want this, and I don't think they'll get it, but if you do want it: they're saying they need six months to do the following. Get regulatory authorities in place. Get watermarking on AI, so you can tell stuff is from AI. Track everyone who has large computational capacity, so they can know what they're doing and make sure they're not training AI on it. And have policies for dealing with model leaks. Now, AI moves fast, but six months is not enough time to do any of that, especially the regulatory authorities. Think about regulation for the internet itself: it's taken 20 years or something to get it even to a place where it's decent, and even then it's not homogenised, it's different all around the world.

Michael Sharkey:
It took me a while to work out whether this was fake when it first came out. It didn't seem verifiable; I'd never heard of the source.

Chris Sharkey:
It's close to April Fools' Day. Had it been released, say, tomorrow, you might have thought that.

Michael Sharkey:
Yeah, that's what I don't really understand. Like, what's the objective here? Is this influential people, people in the know with this technology, coming out and saying to everyone: hey, we've gotta hit the pause button here, we know more than you? I'm just not sure how to interpret it. Yeah.

Chris Sharkey:
But it's also like: hey, everybody else pauses too, please. It's sort of weird, a weird, futile gesture in my opinion. I just don't see that people who are dedicating their lives and careers to this, and people who believe passionately in it, are just gonna stop. When have humans historically had a new technology and then not used it? When iron came around in the Iron Age, do you think people were like: oh, we'd better not make weapons with this, and defend ourselves, and make better cooking implements and stuff, just in case this changes society? No way. They were like: let's use this, it's fantastic, there are so many good applications for it. There's just no way people are gonna stop, no matter what they do here.

Michael Sharkey:
Yeah. The whole proposition of this letter is completely laughable. I don't even know why certain people put their names to it; it just makes them look stupid, I think.

Chris Sharkey:
But it might just be another case of everybody wanting to be seen as part of the international AI movement, and given that this is the flavour of the week, they want to be part of it. I just can't see this amounting to anything; I don't see how it could. And even if it did, it just means that people working on models outside of these mainstream, massive companies are gonna have time to catch up.

Michael Sharkey:
Yeah. We went from GPT-4 being announced, and everyone being like, wow, the productivity, the use cases, the excitement, to: AI is now the next nuclear weapon.

Chris Sharkey:
Yeah. The sceptical side of me thinks they want to slow other people down. I don't think the people signing this have any intention of slowing down themselves; I think they wanna slow other people down. And I think there's a big threat from these open source models, and from models that are being trained on the output of the major large language models right now, because they're realising that even without all the tech, people can train models that are almost as good, or approaching as good. And I just wonder if they're panicking, I say they, but, you know, the people at the forefront of this stuff, worried that other people will get hold of their baby and they won't be the ones controlling it fully anymore.

Michael Sharkey:
We talked about this last week: bad actors potentially using other models to essentially harvest them. I forget the name of that technique, but yeah, you're essentially relying on other models' outputs. Model extraction, that's it. And it sounds like Google Bard got caught doing that as well.

Chris Sharkey:
Yeah. I dunno how official the source was, but essentially an AI researcher who had worked for Google at one time quit Google because he claims they were harvesting data from ChatGPT to train Bard.

Michael Sharkey:
Yeah. So it seems like maybe there's a bit more to it. But Bill Ackman tweeted: shutting down AI development for six months gives the bad guys six more months to catch up. Our enemies are working hard to develop their own OpenAI. It would have been a mistake to delay the Manhattan Project and let the Nazis catch up. I don't think we have a choice.

Chris Sharkey:
It's quite dramatic, but it's also true. I mean, I agree. What is it, you can't put the... the pop back in the thing, you know? You can't reverse this entropy. You just can't take it back now; it's all out there. The volume of work on the open source models alone, you're never gonna stop that. You absolutely will not stop it. And the enthusiasm and energy behind this, you can't just tell everyone: oh, go on holidays for six months and forget about what you've been so excited about for the last year.

Michael Sharkey:
It just doesn't make sense to me. It feels like with climate change and nuclear power: we have this really viable solution, where you build nuclear power plants and get energy really clean and cheap, and yeah, there are some risks, but you learn to deal with them. Do you think that's similar to AI, where by holding back and making this a big scary topic, we're gonna lose all of this benefit to humanity from AI?

Chris Sharkey:
I don't think we're gonna lose the benefit, cuz I don't think it's gonna stop. And I predicted what would happen here. If you listen back, I think it was our very first podcast, I said: we, the general public, who do not have access to these inner circles, they will take this away from us. And I think this is the first step towards that, the first step towards us not having access to these large language models at scale. I think they want the regulation because they want to do it, and you're only gonna get access if you're in a very tightly approved use case, or you know someone, things like that. I don't think, as they get towards general intelligence, that regular people are gonna have access to it, for these exact reasons.

Michael Sharkey:
I agree. I think that is exactly the path we're stepping towards, because you see the evolution of these open source models catching up to potentially what we have now with GPT-3.5, maybe 4, which we'll touch on a bit later. And all of a sudden they're gonna say: this is a security risk, how do we take them away? Yeah.

Chris Sharkey:
You're not allowed to operate them. You're not allowed to have the computing resources capable of operating them, which is something they specifically mentioned in that letter, which means they're thinking it through: how can we slow this down? How can we stop people? We can't control running this stuff, but we can control the hardware, and if you control the hardware, they're not gonna be able to operate it. And, I was just claiming how great I am because I predicted this, but one thing I was wrong about was the importance of the open source models. I think now they're more important than anything: the training data, having access to open source models, having the ability to operate them, and then, soon, having the hardware to operate them before that's restricted, is very, very important.

Michael Sharkey:
But we're seeing models being run on MacBooks, on M1 and M2 chips. Yeah. Reliably. And if that progresses to the point where you can run advanced models, how are they gonna shut this stuff down?

Chris Sharkey:
I think that's why they're panicking, and that's why they want a six month break: so they've got time to figure out how to respond.

Michael Sharkey:
We can't control this anymore, so we need regulations so we can control everyone else.

Chris Sharkey:
And yet, remember, just a couple of weeks ago we were talking about how they're like: oh, we acknowledge all the risks, but we're releasing this today. We acknowledge the risks, but here's an announcement of a new technology and a new model. And now they're like: oh, we probably shouldn't have done that, because people can actually use it to make their own stuff that we can't control.

Michael Sharkey:
Yeah, and it makes no sense to me. Also, interestingly, with GPT-4, I know it's only been a couple of weeks now in the wild, but we're not even getting access to the full token sizes, or the image interpretation. Like, really, it's still vaporware essentially.

Chris Sharkey:
That's the thing, really. As far as I can see from using it extensively over the last week, it does have a bigger prompt size: it's 8,192 tokens instead of 4,096. And that's great; the bigger prompt size does make a difference. GPT-4 definitely gives better results. Microsoft showed this empirically in their paper, which we'll talk about soon. But also, just anecdotally for me, the GPT-4 results, despite its higher cost and despite it being a lot slower than GPT-3.5, it's just better. It's so much better at following instructions, so much better in its results, so much better at explaining its reasoning. So it's no doubt better, but we're not getting the full picture. That 32K prompt size is not to be underestimated. It's a really big deal, and I can't help but think, I dunno what the reason is they haven't let it out yet, but there are reasons there, and I wonder if we'll even see it, honestly.
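
To make those numbers concrete, here is a minimal sketch of guarding against the 8K context limit before calling the API, using OpenAI's tiktoken tokenizer. The 8,192 figure is the standard gpt-4 context window discussed here; the 1,024-token headroom is our own assumption, not anything from the episode:

```python
# Rough sketch: check whether a prompt fits GPT-4's 8K context
# window before sending it. Uses OpenAI's tiktoken tokenizer.
# 8,192 is the standard gpt-4 window; the 32K variant discussed
# above (gpt-4-32k) was not generally available at the time.
import tiktoken

GPT4_CONTEXT = 8192

def count_tokens(text: str, model: str = "gpt-4") -> int:
    enc = tiktoken.encoding_for_model(model)
    return len(enc.encode(text))

prompt = "..."  # your instructions plus any context you stuff in
used = count_tokens(prompt)
# Leave headroom for the completion itself (assumed figure).
if used > GPT4_CONTEXT - 1024:
    print(f"Prompt uses {used} tokens; trim the context or chunk it.")
```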

Michael Sharkey:
Yeah. Without trying to border on being a conspiracy theory podcast...

Chris Sharkey:
We just oscillate. One week we're like, super positive...

Michael Sharkey:
And like, this

Chris Sharkey:
Is great. Humanity's going up. Then the next week we're like: these bastards, they're gonna kill us all.

Michael Sharkey:
It's funny, I feel like I had that week when GPT-4 was announced, where I was starting to get really fearful and lose sleep over this stuff. And then I went to pure excitement, and I still feel like I'm in an excitement phase, especially with the work we do when we're not recording this podcast: starting to implement GPT-4 and seeing the success, and the joy it brings to people in their daily lives.

Chris Sharkey:
Yeah. And just the delight when it solves a problem so comprehensively, and you're like: how? I mean, I sort of know how it did that, but it's still pretty impressive when you see it working, isn't it?

Michael Sharkey:
Yeah. I think that, for me, is the phase I'm currently in, and we touched on this last week: if you just stay in the moment and work within the confines of what we have access to today, you can do some really great things with it, but you also realise the limitations, which is the thing that calms you down in the short term.

Chris Sharkey:
That's right. And when it misunderstands your instructions, one thing they talk about in the Microsoft paper is prompt sensitivity. I've definitely noticed that a lot in the last few weeks: you want to add an extra instruction to a prompt, in the instruction section of what you're telling it you want it to do, and one sentence or one word can vastly change the consistency of the output. Those kinds of things are limitations that need to be solved for this to get to the level everyone's expecting. And I know there's a certain talent for that with people, but I think at some point the AI will handle that itself: being able to better handle that sort of sensitivity of prompts.

Michael Sharkey:
Yeah, the prompting is definitely something you need to learn, and it is difficult. In the Lex Fridman interview with Sam Altman, he even admitted he's not very good at prompting. And a lot of people still do single-shot prompts, where they write "write a blog post about this" and expect a good output. But really, the best use cases I've seen are when you're going back and forth, doing pretty strong prompts as if you were delegating to someone in a company, as opposed to just writing out one sentence and expecting a magical result. So yeah,

Chris Sharkey:
That's right. And that's one thing I really want to try with the larger prompt size with GPT-4 when it's available. Something I've been thinking about, with my own limited human brain, is: let's say you've got a 32K-token prompt, and somewhere in the middle of that prompt you have an instruction that's really important, that needs to be adhered to in order for whatever application you're building to work. How does it know the relative importance of that statement out of that much context information? Surely there's an art to that; surely there's a way you can tell it, these are my non-negotiables, these are nice-to-haves, and things like that. Or is it just good enough that it's able to take it all in at once? In my experience with the smaller prompt size, I've definitely seen cases where it will just totally overlook explicit instructions you've given it.

Michael Sharkey:
For application development, that makes it incredibly hard, because if you're trying to restrict it, or trying to give it essential pieces of information, and a lot of them, then knowing that it might just miss something, and that it's flawed like a human, introduces a lot of risk into your application.

Chris Sharkey:
Yeah, that's right. The other thing is, when you give it a lot of context information, how do you emphasise the relative importance of that information? For example: if you can answer from this snippet, then do it; if you can't, then fall back to this, and then to this. Or maybe it's a medley of those things that is the right answer in this case. Giving that weighting to the context information is, at least in my experience, something you need to work on, because what context you give it greatly influences the output, obviously. You want to give it as much as possible, but you also don't wanna overload it and increase the chance of an answer that isn't quite where you want it to be.
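
There was no documented way to mark an instruction as high-priority at the time, so one common workaround was simply structuring the prompt to separate hard constraints from preferences and ranked context. A hedged sketch, using the pre-1.0 openai-python interface; the MUST/PREFER labels and the example content are our own convention, not an OpenAI feature:

```python
# One way to weight instructions inside a single prompt: separate
# hard constraints from preferences and ordered context. The
# MUST/PREFER labels are a text convention we impose -- the model
# has no built-in notion of instruction priority.
import openai  # pre-1.0 interface; assumes OPENAI_API_KEY is set

system = """You are a support assistant.
MUST (never violate):
1. Answer only from the CONTEXT snippets below.
2. If the snippets don't contain the answer, say "I don't know".
PREFER (when possible):
- Keep answers under three sentences.
CONTEXT (most relevant first):
[1] Pricing starts at $29/month for the Basic plan.
[2] Annual billing gives two months free."""

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": system},
        {"role": "user", "content": "How much does the Basic plan cost?"},
    ],
    temperature=0,  # lower temperature tends to help adherence
)
print(response["choices"][0]["message"]["content"])
```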

Michael Sharkey:
Yeah, I think it's definitely a back-and-forth massaging technique, which is kind of hard when you're writing code: letting it have a response and then working with it on an output, going back and forth.

Chris Sharkey:
Yeah, unless, and I think this is where, if you look at the Microsoft paper, one of the things they were looking at is what they called large language models as a judge. The idea is that you have multiple AI agents, and the other ones judge the output of each to see how well it adhered to the original problem. So essentially, rather than having this one-shot thing where you give a prompt and a context and expect an output, you run that through a series of steps that will judge and refine the output itself, to get it to where you want it to be.
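
As a rough illustration of that judge-and-refine loop, here is a minimal sketch using the pre-1.0 openai-python chat API. The prompts and the two-role structure are illustrative assumptions, not the exact procedure from the Microsoft paper:

```python
# Minimal generate -> judge -> refine loop, in the spirit of the
# "LLM as a judge" pattern described above. Assumes OPENAI_API_KEY
# is set; all prompt wording here is invented for illustration.
import openai

def ask(system: str, user: str) -> str:
    resp = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
    )
    return resp["choices"][0]["message"]["content"]

task = "Summarise our refund policy in two sentences."
draft = ask("You are a careful writer.", task)

for _ in range(2):  # a couple of critique/refine rounds
    critique = ask(
        "You are a strict reviewer. List any way the answer fails "
        "to satisfy the task, or reply exactly PASS.",
        f"Task: {task}\nAnswer: {draft}",
    )
    if critique.strip() == "PASS":
        break
    draft = ask(
        "You are a careful writer.",
        f"Task: {task}\nPrevious answer: {draft}\n"
        f"Reviewer feedback: {critique}\nRewrite the answer.",
    )
print(draft)
```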

Michael Sharkey:
Just like our brains do.

Chris Sharkey:
Well, yeah. I suppose we have the two hemispheres of our brains assessing each other's thoughts and things like that, and you can also critically think about someone else's opinion, and you've got that six thinking hats theory, where you take all these different perspectives. I think it's going to be that: not just multimodal, which will help, but also multiple models that are all working in unison to try and get a better outcome. I think it has to be that. You can't just do it in code; you can't just write code that will evaluate the output of the model, because then you get all the drawbacks of that sort of fixed logic, which you're unfettered by when you're using these large language models.

Michael Sharkey:
I really want to go deeper into this conversation, but before we move on from the AI's-gonna-kill-us-all topic, I did wanna cover one more thing, which is this TIME article by, I'm gonna mispronounce this, but I'm gonna try my best, Eliezer Yudkowsky, who is an AI researcher and a bit of a doom-and-gloom guy on Twitter. He's warning everyone that we're all gonna die. About that letter we mentioned earlier, the one a lot of important people signed asking for the six month pause, he wrote a TIME article saying that's not enough. And I just wanted to read out two paragraphs from that article real quick. If somebody builds a too-powerful AI under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.
There's no proposed plan for how we could do any such thing and survive. OpenAI's openly declared intention is to make some future AI do our AI alignment homework. Just hearing that this is the plan ought to be enough to get any sensible person to panic. The other leading AI lab, DeepMind, has no plan at all. And honestly, I was only able to listen to some initial samples before we recorded, but Lex Fridman also recently interviewed Yudkowsky, and I gotta say, this guy is like the extreme of the we're-all-gonna-die type scenarios.

Chris Sharkey:
It's very hyperbolic. I mean, like, right now? Just unplug the computers, right? I see a future where AI is pervasive and everywhere, but I don't think everyone will instantly be killed.

Michael Sharkey:
Yeah. So I guess, to round this discussion out: what's your takeaway? Is AI just gonna become like the next nuclear weapon, something that needs protections and legislation immediately? Or is this just gonna pass, and next week we'll be back on the road of using AI to improve our lives and productivity?

Chris Sharkey:
I think for them to take drastic action, there has to be a catalyst. There has to be some sort of disaster, or something caused by AI, for them to take serious regulatory action. When you look at major regulations and how they evolved over the years, it always happens after some big event, and then, as a reaction to that event, they start making changes. I just don't see governments proactively legislating something that is still in its formative stages. I'm no expert on that stuff, definitely not, but I just think we're working at different rates of speed here. And even if they get this six months, I don't think everyone just suddenly agrees to pens down, let's stop training, I won't train mine if you won't. If everyone somehow agrees to a ridiculous agreement like that, then maybe. But I just don't think anything's gonna change at all. I think it's a futile gesture, and it's meaningless.

Michael Sharkey:
It'll be interesting to see how it plays out: whether this goes down a path where politicians start to rise up and say, you know, we've gotta do something here, or whether nothing really changes. It does seem to me like...

Chris Sharkey:
These are guys who don't even understand. Like, if you've looked at the TikTok legislation they're trying to get through, they don't even understand the basics of technology. They are not gonna comprehend this. And I'm not insulting politicians, I'm just saying it's not their area of expertise; their ability to understand this is going to take time, and I don't think they're going to understand it until they see the real-world effects of it. You're just not going to be able to run on this abstract thought and go: okay, I can see the impending risk here, let's make detailed legislation that chooses how people apply the technology. It's just too complex.

Michael Sharkey:
I think, though, the thing that still scares me about all of this is that, deep down, every single person, from Sam Altman at OpenAI, to Elon Musk, to nearly everyone involved in AI today, is coming out and saying there is a small chance, maybe 5%, that we won't be able to reach alignment with a sort of superhuman AI, and that this thing could be a real risk to humanity. And 5% does seem like a lot; enough that we should sit down and think about this seriously, and talk about how things could eventuate. And it doesn't help that right now, if someone comes out and says that, the immediate reaction of most people in this community, and of people in general who are interested in AI, is to slam them down and say: you hate AI, you're a Luddite, stop, you just don't want technology to progress. But I think there is some part of it, almost like when we were developing nuclear weapons, where we should say: yeah, hang on, maybe we should be careful how we use these.

Chris Sharkey:
I think you're right there. There are two massive extremes here. One is: we've gotta stop everything immediately and solve this regulation thing. The other is: don't do anything, it's not a concern. I think it's probably, like you're getting at, somewhere in the middle. Let's start having the conversations now. It'll take way longer than six months, but it's worth talking about, because whether or not you believe what we have now is approaching artificial general intelligence, if it's going to get there at some point in the next 20 years, then we absolutely need to start the discussions now, given how slowly those processes move.

Michael Sharkey:
So, moving on to another big announcement from OpenAI that came out shortly after we recorded our podcast last week: ChatGPT plugins. For those that are not aware, these are essentially plugins that you can run in ChatGPT. One of the examples cited was Instacart: you could work with the AI, say, come up with some recipes for this week, and then ask ChatGPT to go and order the ingredients in your Instacart, which is a pretty cool example.

Chris Sharkey:
And ask someone else to go and order them for you. Yeah.

Michael Sharkey:
Sure. Yeah, well, I've got an aside on that in a minute that you'll find hilarious.

Chris Sharkey:
Can you also please thank them for going and doing my groceries for me?

Michael Sharkey:
But yeah, so essentially it can do the Instacart order thing. I think some of the next iterations, if they ever let us do the image thing, are taking a picture of your fridge and saying: what do I have, versus these recipes, then go shop for the rest. There are some pretty cool, interesting use cases around that, though I would say nothing groundbreaking, and I'm in two minds on this. One part of me is like: oh, congratulations, you just invented a more advanced Amazon Alexa skill; those have been around for quite some time with the voice assistants in your house. But on the other hand, I'm like: this is a whole new development paradigm, where ChatGPT can go and figure out how APIs work and interact with them based not on code but on human instructions, which is how you code these plugins. You essentially just write instructions and documentation for your API, and then the AI figures things out, which I think is fundamentally the biggest breakthrough and the biggest part of this news.

Chris Sharkey:
Yeah, I agree. The interface is the news, because really you could do this anyway, right? You could hit the thing, tell it what instructions are available to it, it can output them as text, and then your code can go and execute those instructions against APIs, and anything else you wanted it to do. There are examples in the Microsoft paper, and this was even before the plugins were announced, they did it a while ago, of manipulating a calendar, a to-do list application, things like that. All of those were already possible. What this does is bring the tools closer to the system, to the level where it can control them through language. So I think that's an essential step in the right direction. I don't think they've added anything that wasn't already possible; they've just made an architecture that makes it easier.
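
A minimal sketch of the pattern being described, text in, text out, with your own code executing the actions. The TOOL/ARGS wire format and the stub tools are invented for illustration; ChatGPT plugins formalise the same idea with OpenAPI specs and natural-language descriptions instead:

```python
# The model only ever sees and emits text: your code advertises
# the available actions, parses the model's choice, executes it,
# and could feed the result back for another round.
import openai  # pre-1.0 interface; assumes OPENAI_API_KEY is set

TOOLS = {
    "get_weather": lambda city: f"14C and raining in {city}",  # stub
    "add_to_cart": lambda item: f"added {item} to the cart",   # stub
}

system = ("You can use these tools: get_weather(city), "
          "add_to_cart(item). To use one, reply exactly "
          "'TOOL <name> ARGS <value>'. Otherwise answer directly.")

def run(user_message: str) -> str:
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user_message}],
    )
    reply = resp["choices"][0]["message"]["content"]
    if reply.startswith("TOOL"):
        _, name, _, arg = reply.split(maxsplit=3)
        return TOOLS[name](arg)  # execute the action in real code
    return reply

print(run("What's the weather in Melbourne?"))
```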

Michael Sharkey:
The biggest part of this announcement that I'm trying to wrap my head around is: what does this signal about OpenAI's intentions for the future? Because it went from nonprofit research, to having APIs to enable other people to use AI and, as they say, empower everyone to benefit from it, to being the fastest growing consumer app of all time with ChatGPT. And now they're introducing plugins, which feels a little bit like an app store, almost. So are they now pivoting and saying: okay, we're gonna go after this one app to rule them all with AI, because this is the way we're being led, this is what most people are interested in? And it makes me wonder: do they just ditch the APIs in the future and say, we're the next Google-type company, with services around AI?

Chris Sharkey:
It's a really good point. It's literally like they've struck oil; there's oil spurting out of the ground, and they're like: oh god, we've gotta build a factory around here to collect this, we need trucks to get it out, we need all the things around it. They're not quite making it up as they go along, but they are realising the implications of what they've created as they go along. And meanwhile, they're racing against everybody else who's catching up with them. So they need to do something solid that makes them different, and get it embedded into other products and things before everybody else can do the same thing. So I think it's partly that: a frantic, mad rush to work out who they are, what their identity is, and how they end up.

Michael Sharkey:
Yeah, that's the thing. Everyone's talking about, oh, this is as big as the iPhone App Store, but to me, this is way bigger than that. This is potentially the final app. There are no other apps, because...

Chris Sharkey:
Yeah, I mean, someone pointed out that with OpenAI's embeddings, for $50 million you could basically embed the entire index of the internet and just have your own Google. And that's a trivial way to look at it, because obviously Google does more than just that, they have rankings and things like that. But there are so many possibilities with the suite of technologies OpenAI has that it's hard to know what to do. In a way, they might be having the same panic we have spoken about, where you get overwhelmed by all the different things you could do, and so you dunno which ones to work on.

Michael Sharkey:
Yeah. Perhaps this is why they're picking away at things we understand today. Everyone understands the concept of apps, so it's like: okay, well, we have plugins, and these plugins are sort of like apps that work in ChatGPT, and it means we can cooperate with all these great brands and work alongside them. But the most interesting one in here for me was Zapier. AI is phenomenal at automation, and it can also read APIs. So if I'm Wade over at Zapier, the CEO there, and I'm looking at this, surely, and this goes for every SaaS business really, ChatGPT could replace it all. It can read APIs, it can connect things, it can automate things, and eventually, I'm sure in the next version, it can build full custom interfaces to do tasks, if that's what you want outside of a chat interface.

Chris Sharkey:
It definitely can. Yeah.

Michael Sharkey:
So to me, it is truly the killer app for enterprise SaaS, for everything in existence, because you could rebuild really any app, including our app, our business today, by working with ChatGPT. So to me, this could potentially be something none of us can fathom; the disruption here could be so foundational to everything we take for granted today. Or it could just be a terrible plugin store.

Chris Sharkey:
Well, I'm reassured, at least as far as our own business goes, that we're still a couple of steps away from that, just because of what we discussed earlier: it works great until it doesn't, until there's some sort of disconnect or an issue, or it's slow. There are a lot of practical limitations that stop this universal app thing for now. But I agree with you that that's the direction it's heading. Suddenly it's a lot easier to go to it to solve all of your problems. And I think people are discovering that with GPT-3.5 Turbo: because of its lower cost, you can start to throw problems at it that you would previously have solved with code or with UI, just using ChatGPT as an API. So you can do things a lot faster and cheaper, well, maybe not cheaper, but cheap-ish, rather than going off and building a whole application to do it. So yeah, I can see that.

Michael Sharkey:
I'm gonna make a pretty profound statement now, and I think I may regret this, but I'm gonna do it anyway. I think there are actually no killer AI apps outside ChatGPT. I don't think there are any other apps, which is why I think they might kill their API eventually and just go it alone, or at least people will stop using it. Here's why.
Look at some of the early first movers that raised a tonne of money in this space: tools for trawling through your code to understand it, replaced by Microsoft Copilot; Jasper, to write copy, why not just go to ChatGPT and ask it directly? To me, all of these are wrappers on the APIs, which is essentially all they are, and we've talked about this before. You could argue Salesforce, which is a CRM, for those that don't know, software that helps people manage customer relationships, is a wrapper on a database, like a UI on top of a database. But my feeling here is that there are no killer apps, and that is a problem. Think about all the examples we're seeing: oh, I got ChatGPT to help me build Pong; I got ChatGPT to write better suggestions for support tickets. They're great implementations that are helpful to people, but they're not killer apps. They're not game-changing apps, like a Google coming along in history. To me, ChatGPT is the killer app. It's the only app.

Chris Sharkey:
Well, for your prediction to be true, I think they need to add a few things. One is that they're gonna have to settle on a vector database and embeddings. I mean, they've got their embeddings model now, but right now you sort of have to do it yourself.

Michael Sharkey:
Can you explain to people who are non-technical what all this means?

Chris Sharkey:
Yeah, sure. So embeddings are basically where you take two pieces of text and say: how similar is this to this? You run them through, they call it a cosine similarity algorithm, honestly I don't fully understand it, but the output is a number which basically says how similar two pieces of text are. Then you store all of that in a vector database, which is sort of like a tree it can follow a path down to find the most similar thing fairly quickly, right? There are various algorithms you can use for that; the one we use, for example, is one Facebook released called FAISS. And so what you do is you take large amounts of data, in our case maybe our user documentation, or a website, or whatever it is, and you run it all through this embeddings model, right?
And it stores the vectors for all the different snippets of text, and you pick the chunk size, so for example you might have 2,000 characters of text per chunk, and it divides them up and stores an embedding for each. Then you can search that with a string. Let's say you wanted to know about, I don't know, the pricing off a website, for example. It'll search and find snippets relevant to the pricing, then use those as context for a ChatGPT query, and use that context information to answer the question. So it's a two-step process. In fact, it can be more than two steps; you can actually run it in an iterative way to get an answer.
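
Here is roughly what that two-step retrieval looks like in code, using OpenAI's text-embedding-ada-002 model and FAISS, which Chris names. A minimal sketch with toy documents; real systems add smarter chunking and persistence:

```python
# Chunk the documents, embed each chunk, index with FAISS, then
# embed a query and pull back the most similar chunks to use as
# context. text-embedding-ada-002 returns 1536-dim vectors.
import faiss
import numpy as np
import openai  # pre-1.0 interface; assumes OPENAI_API_KEY is set

def embed(texts: list[str]) -> np.ndarray:
    resp = openai.Embedding.create(
        model="text-embedding-ada-002", input=texts)
    vecs = np.array([d["embedding"] for d in resp["data"]],
                    dtype="float32")
    faiss.normalize_L2(vecs)  # normalised vectors + inner product
    return vecs               # = cosine similarity

docs = ["Pricing starts at $29/month for the Basic plan.",
        "Support is available 24/7 via live chat."]
chunks = [d[:2000] for d in docs]    # crude 2,000-char chunking
index = faiss.IndexFlatIP(1536)      # inner-product (cosine) index
index.add(embed(chunks))

query = "how much does it cost?"
scores, ids = index.search(embed([query]), 2)  # top-2 matches
context = "\n".join(chunks[i] for i in ids[0])
# `context` now goes into the ChatGPT prompt as grounding.
```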

Michael Sharkey:
Way of thinking about this would be memory.

Chris Sharkey:
Yeah, like long-term memory, exactly. And one of the things the models could do eventually is update that database themselves, as a form of memory. So, that example I gave earlier, someone joking that for $50 million you could embed the whole internet: they basically calculated the amount of text data on the internet and asked, could you create embeddings for all of that? Obviously there'd be practical limitations on memory and other things; it's not as simple as they make it sound. But the idea is that even the 32K prompt size isn't that much if you want to say: write an essay about every time Harry Potter was in a fight across the Harry Potter series of books. Whereas using embeddings like this, you could do that, because it would be able to gather up all the different snippets, do it in an iterative process if necessary, and then use that larger prompt size on GPT-4, when it comes out, to do tasks like that.
So I think OpenAI is going to need their own one of those built in, in order to be the killer, app-to-end-all-apps thing. The second thing is what they've just done, the plugins: it's able to take actions now across all the different things, because obviously without that it can't do what you're describing. And then the third one is data gathering, right? It can't be this universal thing if it doesn't have access to timely information. I mean, that could also come through the plugins... actually, what am I talking about, that's what the plugins are for too: gathering the data to build these vector databases of embeddings. So yeah, I kind of agree with you. I think maybe that's what they're building towards. It certainly is possible.
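
And a sketch of the iterative version for a question whose source material won't fit in one prompt: retrieve in batches, condense each batch into notes, then answer from the notes. The search() and ask() helpers are hypothetical stand-ins for the FAISS lookup and chat call sketched above, not a library API:

```python
# Iterative retrieval: condense batches of retrieved chunks into
# notes, then answer from the notes, which fit in one prompt.
def search(query: str, k: int) -> list[str]:
    # stand-in for the FAISS top-k chunk lookup above
    return [f"excerpt {i} relevant to '{query}'" for i in range(k)]

def ask(prompt: str) -> str:
    # stand-in for a single GPT-4 chat completion call
    return f"(model output for: {prompt[:40]}...)"

question = "Describe every fight Harry Potter is in across the books."
hits = search(question, k=100)
notes = []
for start in range(0, len(hits), 10):          # 10 chunks per pass
    batch = "\n---\n".join(hits[start:start + 10])
    notes.append(ask(f"From these excerpts, note anything relevant "
                     f"to: {question}\n\n{batch}"))

# Final pass: answer only from the condensed notes.
answer = ask(f"Using only these notes, answer: {question}\n\n"
             + "\n".join(notes))
```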

Michael Sharkey:
What about the next step, where they say: okay, thanks, Expedia, for helping people book trips through your plugin, but we're good now, we're just gonna get our AI, which is closing in on AGI, to go and read the booking systems for all the different airlines, figure out better prices and when to book, and completely cut these middlemen businesses out of the equation? I think that could be the next step towards being the final app. It just depends...

Chris Sharkey:
It depends whether they'd want to solve that. I mean, will those be trivial problems to them at that point? If you've made a whole new life form, essentially, do you really give a shit about booking holidays? I don't know.

Michael Sharkey:
I feel like there's gonna be this phase, and I'll probably be wrong, but over the next five years, where we think this race to AGI is gonna happen really quickly, but to me it feels like there'll be this five-year window of increased productivity and interfaces changing rapidly, like how we interact with computers is obviously gonna change fairly rapidly. But I think for all those changes to occur, the AI needs to get really good at helping us build interfaces and helping us implement AI into everything in our world. If it doesn't, it will take much longer, because, as you said, it's not that easy to work with. There are a lot of problems.

Chris Sharkey:
Yeah. Well, one of the main things in that Microsoft paper on GPT-4 was around using the AI to train other models, and the idea that maybe you have these chained models where, I've mentioned this a few times before, the AI is selecting the right tool for the job in terms of models. And some people speculate, because no one really knows how GPT-4 works, that it is actually multiple models, with a sort of routing process going on that picks the right tool for the job depending on the question, which is why you sometimes see higher latency on certain questions than on others. But that's just speculation, because they haven't told anyone how it works. So I think, in a way, the AI may be able to help itself: building those interfaces, coming up with new ways to interact with humans, that can get it to where it needs to be. And I think that is how it gets progressively more intelligent. The idea is that this model has no internal motivations and goals, but it's a step towards increasingly intelligent systems.

Michael Sharkey:
Yeah, and I think as you get these plugins, and this memory component you're talking about, that's getting it closer to AGI. These are all just steps, and the AGI eventually is like: I don't need any of these silly apps, I'll just go direct to source to help my human counterpart. So if there's some form of alignment there, where the AI truly does help humans, that's the path it'll go: well, I don't really need any of these middle things, I don't need any of these silly interfaces, I'll just build them on the fly and connect direct to source. So it does seem to me like we just can't fathom the disruption. I think your whole operating system could be AI-based in the future.

Chris Sharkey:
Yeah. And one of the things related to that that is really interesting is that GPT-4 without the multimodal part, so just the text-based part of it, is actually able to understand and manipulate images when working with a language like LaTeX, which is the way you write mathematical formulas; pyplot, which is a plotting library for Python; and this other one called TikZ, T-I-K-Z if you look it up, which can draw simplistic images and shapes and things like that. Now, they basically show that GPT-4 can take a TikZ description, text instructions for how to draw an image, and understand what that image is, so it can visualise and manipulate it. For example, they kept drawing unicorns, but they left the horn off, right? And they'd change the way the unicorn was drawn using this visual language. And then GPT-4 was able to add the horn on in the right spot, in the right colour and everything. So how's it doing that? It's visualising.

Michael Sharkey:
It's just predicting the next word, though.

Chris Sharkey:
Just predicting the next word. But it's able to take those instructions and understand what is needed. And it must be, in a way, maybe not in the way humans do, but it must be visualising what that sequence of output leads to. For example, it can sort of simulate executing code by reasoning its way through the execution: it can run fairly complex Python programmes even though it's not a computer, it's not running the code, it's stepping through it with step-by-step logic to work out what would happen if the programme were executed with a given input. And it can do that visualisation. So that's a long way of me coming back to your point and saying: who knows what all these plugins will do? Once it knows it has the power to do all these extra things, gather this information here, take this action there and see what the response is, a sort of experimentation, stimulus and response, who knows what abilities and sensors it will be able to develop out of having that extra stuff? They didn't know when they were training it that it would be able to do this visualisation stuff and be great at it. So how do you know, when you give it all these extra abilities, that it won't develop further abilities stacking on top of that, that we can't even fathom yet?
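
For readers who haven't seen TikZ: an image exists only as text instructions, which is what lets a text-only model "draw" and edit pictures by emitting and rewriting those instructions. A tiny illustrative figure in the unicorn spirit; this specific drawing is ours, not one from the Microsoft paper:

```latex
% A complete, compilable TikZ document. Every shape below is just
% a line of text, so editing the picture (say, adding a horn) is
% purely a text operation.
\documentclass{standalone}
\usepackage{tikz}
\begin{document}
\begin{tikzpicture}
  \draw[fill=white] (0,0) ellipse (1.2 and 0.7);   % body
  \draw[fill=white] (1.1,0.5) circle (0.4);        % head
  \draw[thick] (1.35,0.85) -- (1.7,1.4);           % the horn
  \foreach \x in {-0.6,-0.2,0.2,0.6}
    \draw (\x,-0.65) -- (\x,-1.2);                 % legs
\end{tikzpicture}
\end{document}
```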

Michael Sharkey:
Well, interestingly, they took the browser plugin offline over the weekend, just after launch. They just completely took it offline, and they're not letting that many people in right now. So who knows what these plugins could potentially unleash. But back to your point: maybe it tells us more about how humans learn and think than about the AI itself. Maybe how it's visualising those problems through interpretation of language is how we also do it. There's nothing to say...

Chris Sharkey:
I'm increasingly of that opinion. And I think that's the thing: I think they really are creating an intelligence, and I think it's similar to how our intelligence works.

Michael Sharkey:
Yeah. Again, we've mentioned this show after show, but to me that's what scares me the most about all of this: how dumb are we, or how basic are we, and how quickly will it supersede us? But don't worry, a six month pause will help.

Chris Sharkey:
Yeah. It just seems ill thought through. I don't know that a petition's the right way to go about it; I mean, I'm not experienced with this stuff. And it's also very American-centric, right? All of this stuff is American. I didn't look through all the signatures, maybe it's universal, but it seems like they're not looking at what's going on everywhere; they're just looking at what's going on in their inner circle and broadcasting their current state of thinking. Which makes me wonder: a strategic AI that was trying to take over the world, what actions would it take?

Michael Sharkey:
Yeah, maybe.

Chris Sharkey:
I read a bit this week about the AI and its powers for misinformation. Microsoft experimented with this, and geez, it's good at misinformation. So you start to wonder. Actually, one of the conspiracy theories around AI is: is AI being put out specifically so no one can trust anything anymore? How do you know whether what you're seeing in terms of images, watching in terms of videos, or reading in terms of text was made by AI or by humans? It's gonna get to the point where it's like, well, how do I know if this is real or not? And maybe that's intentional.

Michael Sharkey:
Well, we saw Midjourney pause their image creation capability because it was being used for deepfakes across the web. Yeah. They just put a stop to it, because the new version 5 is so good that people were believing some of these deepfake images that were being created, like the Pope at Burning Man and all of these. Yeah.

Chris Sharkey:
And I think all of us, people who listen to this podcast, people who have the ability, need to start to see ourselves as AI hoarders: start hoarding data, start hoarding models, start hoarding papers. Because, like, we announce something one week on this podcast, and the next week you can't even get it anymore. I didn't have time to try that plugin you mentioned. This stuff is so fleeting and so rapidly changing, you need to get your hands on what you can, because there might be stuff out there you could use later that's gonna be taken away from you, never to be seen again. And it could be permanent, if some of this regulation stuff comes in. It's a really important time to be on top of this, and to stay on top of what's happening, if you're interested and feel like it's going to have the influence we think it will.

Michael Sharkey:
Over on Twitter, a user by the name of rez0 was able to hack the new ChatGPT API, or plugin capability. He says: this morning I was hacking the new ChatGPT API and found something super interesting: there are over 80 secret plugins that can be revealed by removing a specific parameter from an API call. The secret plugins include a DAN plugin, which I would love to get my hands on, a crypto prices plugin, and many more. And you can see them, if you're watching.

Chris Sharkey:
So the OpenAI team's just sitting around trading crypto with all their amazing tech.

Michael Sharkey:
Well, it sounds like it. It also sounds like they have a DAN plugin, so they clearly have a sort of god mode, do-whatever-you-want mode, which, again, they're not handing over to the general public. I mean, I like what they call it: DAN, a plugin that will change ChatGPT's personality, which I think would be fascinating to use. So it does show that there are all of these different plugins coming along, unleashing all these capabilities, and clearly you could put anything you wanted in there. But the striking thing to me was that someone also got these plugins, which were specifically designed for ChatGPT, to work easily with Anthropic's Claude, which is another, lesser-known chatbot. I guess...

Chris Sharkey:
That's the thing: language is the universal interface. That's always been the philosophy of Unix, right, where you pipe one command into the other, and text is the interface, because it's compatible with every programme. So it kind of makes sense that if these plugins are working off the large language model's ability to understand text, the thing it's best at, they would be universal.

Michael Sharkey:
And then we saw the same thing using ChatGPT plugins with LLaMA. So these plugins, again, aren't much of a moat for OpenAI either. Which brings me back to the earlier point: they're becoming this mass consumer app, they're developing these plugins now, and maybe chasing...

Chris Sharkey:
I mean, they're holding onto the end of a fire hose being flung around in the air. I don't think they're in as much control of it as they thought they would be.

Michael Sharkey:
Yeah. And the interesting thing is, they themselves could technically be disrupted by these open source models fairly quickly. I guess the only unique advantages they have are how they've structured it, potentially the multimodal models, and their access to data. But everyone's gonna close that gap.

Chris Sharkey:
And if you wanted to slow down the competitors, you know what you'd do: you'd try and constrain their hardware resources, you'd try to stop them being able to train massive models over a certain size. It all sort of starts to add up: this petition is very convenient for OpenAI as a way of stopping the competitors. Yeah.

Michael Sharkey:
That's the thing I don't think anyone's realised. Obviously none of them have signed it, and for good reason, you wouldn't expect them to. But how do you know they're not behind it? How do you know OpenAI didn't write this and email all these people to get them to sign it, for this very reason: because they want to restrict other people, because they want to be the ones that control this potential agent? Yeah.

Chris Sharkey:
And I also think that in the open source community, the energy and excitement over the last couple of weeks has definitely not been around GPT-4. I mean, a little bit, yeah, but that tends to be more on the consumer application side: look at what I can do with this. The actual talk about research and experimentation and results and benchmarks has been all around the open source models. There's definitely a lot of energy behind them, and the timing sort of adds up.

Michael Sharkey:
I think

Chris Sharkey:
There's just this list of awesome decentralised large language models. You've got GPT4All, which is trained on 800,000 GPT-3.5 Turbo outputs; Dolly, a large language model trained on the Databricks machine learning platform; bloomz.cpp, an inference engine for Hugging Face's BLOOMZ models; Alpaca, which we've spoken about before; llama.cpp, which is Facebook's LLaMA in C++; and the list goes on. And there are also these open datasets, combinations of all the different stuff out there, that you can use to train things. So there's an absolute movement in this area, of models that are approaching the quality of the OpenAI stuff. Of course they're not as good, but geez, they're decent, and they're smaller, as we've discussed. People can actually run them, and run them in a way that's not gonna cost them $50 million to do some experimentation.

Michael Sharkey:
Well, our dear friend, who doesn't know he's our dear friend but whom we quote on this show a lot, Simon Willison, tweeted: some wild speculation here, but I think it might be possible to train a LLaMA-7B-size model for $85,000 now, and maybe run that model directly in your web browser, with more capabilities than ChatGPT's, through hooking up extra tools to it, like Bing. So this idea of plugins, yeah, I believe, can then be brought to your own model. And $85,000 is a lot cheaper than $10 billion.

Chris Sharkey:
That's right. And all of this leaves out fine-tuning and customising it to the specific problem you're trying to solve. In those cases, the smaller models often work better anyway, if you're trying to customise them for a specific task. Maybe not for everyone, but when you're training it yourself, you've got those advantages.

Michael Sharkey:
So perhaps it's not one app to rule them all. Perhaps it's literally a lot of AI models getting more and more sophisticated, competing, and it's literally

Chris Sharkey:
Evolution. Yeah. Like the plugins and the tools and the tool stacking, what I'm actually thinking about as we're going on with this podcast is letting the AI know, hey, these are the things at your disposal. Because one of the things Microsoft went through in their paper is they gave it real-life problems to solve. So they had it as almost like a marriage counsellor, and it would say, you know, this person's saying this, this person's saying this, what do you reckon is going on here? And then it's like, oh, this person seems to be talking past that person, they need to listen more, they need to do this, like a pseudo-psychologist. And then they gave it things like, oh, the ceiling in my roof is leaking, something I can relate to quite personally. How would I solve this problem?
And then it told a series of actions it would take to solve that problem, including contingencies based on what might happen. Now you give that same reasoning logic the tools to actually do that: the ability to make a phone call, write an email, you know, check the stats on some sort of home automation system, or I don't know what. You give it a stack of tools and suddenly it can solve real world problems. Like, it really is that step towards a sort of general intelligence that has those real world actuators we were talking about. It's coming in the form of a plugin, which sounds so simple, like it's just a browser plugin, but giving it these abilities, and it having knowledge of the capabilities of those tools, I mean, it's a big step. A logical step, of course.
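
A rough sketch of the tool-stacking idea Chris is describing: tell the model what tools it has and let it choose one. The tool names and the reply format are invented for illustration; this is not OpenAI's actual plugin protocol.

```python
# Give the model a menu of tools and dispatch whichever one it picks.
# Uses the openai Python library's ChatCompletion API (early-2023 style);
# the tools and the "TOOL_NAME: argument" format are made up for this sketch.
import openai

TOOLS = {
    "make_phone_call": lambda arg: print(f"[calling] {arg}"),
    "send_email": lambda arg: print(f"[email sent] {arg}"),
    "check_home_sensor": lambda arg: print(f"[sensor reading] {arg}"),
}

system = (
    "You can use exactly one of these tools: " + ", ".join(TOOLS) + ". "
    "Reply in the form TOOL_NAME: argument, and nothing else."
)

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": system},
        {"role": "user", "content": "The ceiling in my roof is leaking. What should happen first?"},
    ],
)

# Parse the model's choice and run the matching tool.
tool_name, _, argument = response.choices[0].message.content.partition(":")
TOOLS.get(tool_name.strip(), lambda arg: print("unknown tool"))(argument.strip())
```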

Michael Sharkey:
I love how large language model plugins may destroy humanity. I mean, that's the current proposition.

Chris Sharkey:
Yeah.

Michael Sharkey:
Maybe I'm laughing now and, like, the world will be ending, and so we

Chris Sharkey:
Just get taken down by one big app store. Yeah,

Michael Sharkey:
It's basically app stores are gonna destroy us.

Chris Sharkey:
I forgot my password.

Michael Sharkey:
It was Steve Jobs all along. Uh, so the one thing I've really wanted to touch on, and I think we've seen some of the limitations, or at least talked about the limitations, is that Andrej Karpathy, who started at OpenAI, went to Tesla to work on their self-driving AI, and is now back at OpenAI, tweeted, in response to another tweet: good example of us not seeing max GPT-4 capability yet, in my opinion; prompt design, tool use, metacognition strategies (e.g. the idea of attempt, critique, retry, capability model, et cetera) are very likely to go a long way. But my point here is that I just don't think we as technologists and users of GPT-4 are even going to discover its capabilities or utilise them correctly with the way we interact with these models. It just seems like no one is really exposing their true powers yet, or has any way of doing it. And we're all sort of learning as we go, but by the time we start learning a few things, the next model's out.
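
Karpathy's attempt-critique-retry idea can be sketched as a simple loop. The prompts below are our own guess at the technique, not anything he or OpenAI published:

```python
# "Attempt, critique, retry": ask for an answer, ask the model to critique it,
# then ask for an improved answer that addresses the critique.
import openai

def chat(prompt):
    response = openai.ChatCompletion.create(
        model="gpt-4", messages=[{"role": "user", "content": prompt}]
    )
    return response.choices[0].message.content

task = "Explain in one sentence why multi-leg bets favour the bookmaker."

attempt = chat(task)
for _ in range(2):  # a couple of critique/retry rounds
    critique = chat(
        f"Task: {task}\nAnswer: {attempt}\n"
        "Point out the single biggest flaw in this answer."
    )
    attempt = chat(
        f"Task: {task}\nPrevious answer: {attempt}\nCritique: {critique}\n"
        "Write an improved answer that fixes the flaw."
    )
print(attempt)
```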

Chris Sharkey:
Yeah. And I think it requires deep thinking, and it's hard to get the time to do that, right? Because there are so many advances, you've just gotta keep up with the technology, so actually thinking through the implications of this is difficult. Which

Michael Sharkey:
Makes me think that the team at OpenAI has seen a lot more than we, the general public, have seen, because they're able to have resources constantly, as they say, red-teaming this model and trying to exploit it, or do things that people might do in the real world with it that they necessarily don't

Chris Sharkey:
Want it to do. That's a great inference. You're almost as smart as GPT-4. I agree with that, because you're right, they're actually sitting around and they are thinking it through and they're trying it.

Michael Sharkey:
Yeah. And we just don't. Like, even when I try and push it right now, I'm still getting outages where it'll stop halfway through, and I'm like, can you resume that? And then it just crashes and says to restart the conversation. So we're not getting the full token size, we're not getting the image capabilities, and we're getting a model online that hasn't scaled very well and is slow. So it's really hard. Honestly, most of the time when I'm trying to push it, I give up, because it's just so slow at responding. I'm like, oh, I don't have time for this.

Chris Sharkey:
Yeah, and I agree, experimenting with the model is very time consuming. You know, you've gotta iterate on it, you've gotta try the data, and you're also constantly wondering, like I said earlier, is this one word in my prompt throwing off the results I'm getting? It definitely makes it time consuming. Maybe AI itself, or someone, will come up with a framework that allows you to evaluate things and iterate faster, or get you to a better prompt faster. But I agree it's very time consuming, which means you can't go through all the different possibilities of what you could try to solve a particular problem, or even take on a big enough problem. I think you've said this on the last few podcasts: with the AI stuff, you've gotta think more generally. You've gotta think, well, why solve this specific problem when I could solve all problems of this nature, and that's just one subset of what I'm trying to do here?
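
A minimal sketch of the kind of harness Chris is wishing for: score prompt variants against a few known test cases instead of eyeballing a chat window. The variants, test cases, and the crude scoring rule are all invented:

```python
# Compare prompt variants on a tiny labelled test set and print a score for
# each, so a one-word prompt change shows up as a number rather than a hunch.
import openai

variants = [
    "Answer yes or no: {question}",
    "Think step by step, then answer yes or no: {question}",
]
cases = [("Is 17 a prime number?", "yes"), ("Is 21 a prime number?", "no")]

for template in variants:
    score = 0
    for question, expected in cases:
        response = openai.ChatCompletion.create(
            model="gpt-4",
            messages=[{"role": "user", "content": template.format(question=question)}],
        )
        answer = response.choices[0].message.content.strip().lower()
        score += answer.startswith(expected)  # crude: reply must open with yes/no
    print(f"{score}/{len(cases)}  {template}")
```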

Michael Sharkey:
It feels like we currently communicate with these language models in Morse code, and we need video chat or something. We need a better way to communicate. And maybe that's where devices like Neuralink come in: we put them in our brain and the AI can read our thoughts, which is a scary proposition. But I even find it frustrating just having to type it all. Like, I type all day in Slack and, you know, you're typing all day, you just get so tired of it. Well, that's

Chris Sharkey:
Why I use my voice one that I made, and just talk to it. It definitely speeds things up to be able to talk to it. I mean, there's a bit of lag, obviously, but it's great, because that barrier to actually getting in there and typing all that stuff out is gone when you speak to it.
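
A minimal sketch of a voice loop like the one Chris describes, assuming OpenAI's Whisper transcription endpoint and the offline pyttsx3 text-to-speech library; the recording step is left out, so "question.wav" is a hypothetical pre-recorded file:

```python
# Voice round-trip: transcribe spoken audio with Whisper, get a GPT-4 reply,
# and speak the reply aloud with local text-to-speech (pip install pyttsx3).
import openai
import pyttsx3

with open("question.wav", "rb") as audio_file:
    transcript = openai.Audio.transcribe("whisper-1", audio_file)

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": transcript["text"]}],
)
reply = response.choices[0].message.content

engine = pyttsx3.init()
engine.say(reply)  # read the model's answer aloud
engine.runAndWait()
```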

Michael Sharkey:
One other thing I wanted to touch on today is your AI gambling experiment. I don't know how legal this is; talking about it on this podcast will probably get us into trouble at some point. But I think,

Chris Sharkey:
Here's the logic of actually talking about it. I think the goal of this podcast was always you and I just having a conversation like we would about this stuff, and we would talk about this. So I think that meets my definition. Hopefully

Michael Sharkey:
That holds up in court. We're not

Chris Sharkey:
Recommending, we're not recommending bets, we're not recommending anyone do anything. We're just talking about what we're trying, essentially.

Michael Sharkey:
So do you want to give us an update on the AI gambling experiment?

Chris Sharkey:
Yeah, so basically what I wanted to do was, I like to bet on these same-game multi bets, which is basically where you bet on individual players, saying that

Michael Sharkey:
This sounds like our first paid sponsor, because

Chris Sharkey:
Yeah, exactly. Oh guys, gambling is great, you gotta try it. But yeah, so, like, you know, say Jayson Tatum to get over 25 points and two rebounds, or something like that. You combine all of these things and you get greater odds. Now, obviously, when you combine them, everyone who's done basic statistics or probability knows that the more different outcomes you need to happen at the same time, the far less likely it is to happen. And the bookies are smart; they place the odds so you are taking way more risk than you're getting in terms of reward, right? So they're stupid bets to make, because it's always in the house's favour: they only need one outcome to fail for them to win, and you need them all to come true. But anyway, what I did with the AI was I basically used LangChain, like we've discussed before, and I got all of the stats for the NBA and all of the stats for the AFL, for all the different players and how they perform, as well as the teams' performance, the locations, all that sort of stuff.
Combine that with, you know, the odds of a particular match, how many points are likely to be scored, and things like that. And then what I did was start to ask it about specific outcomes for specific players, like, will this guy get more than six rebounds in this match? Then I thought, well, why not combine them and say, what's the likelihood of this guy getting that and this other guy getting, say, ten rebounds or whatever in the same match? Because one player doing well might mean the other does poorly, or one doing well might make the other go better. There are so many possibilities to consider, and I'm like, this thing has all the time in the world to sit around and think about this for me. So I started to do that, and what I discovered over time is that if you ask it the right questions based on that data, it can do even better.
So for example, you could give it, say, five different outcomes, and it'll say something like, hey, we can never know the future, this is likely to happen but may not. And you're like, okay, cool, yeah, I freaking know that. But if you massage it, you can get it to the point where I say: if I was to remove one of these outcomes to disproportionately increase, and I use that word in my prompt, disproportionately increase, the chances of the whole thing succeeding, which one should I remove? And what I've started to do is use a process of elimination and have it remove the things it thinks are the most risky, like, you're taking too much risk on this one relative to another one. So anyway, last night I placed my first bet, at 4.85 to one, and it gets home. AI: one from one, a hundred per cent. Got the second one on today. I'll be quitting this podcast soon, by the way, because I'll be rich, or I'll be banned

Michael Sharkey:
For AI gambling.

Chris Sharkey:
Yeah. So basically this is my future: sitting around playing with an AI model to bet on sports. But look, it's fun, it's just meant to be fun, it's just meant to see. And look, it's a sample size of one. So I could also become a degenerate and gradually degrade over the course of the next few weeks. We'll see. But I'm gonna keep refining it, keep working on it, and look, honestly, it's a great way to play with the technology and see how it goes. And a lot of it is about convincing it not to just caution me on the fact that it can't tell the future.
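
To make the betting maths above concrete: the probability of independent legs multiplies, and the payout on offer sits below the fair odds, which is the house edge Chris mentions. The probabilities and odds below are invented, and the elimination prompt is only a rough reconstruction of his description:

```python
# Why stacking legs favours the house: multiply the per-leg probabilities,
# convert to fair odds, and compare with a (hypothetical) offered payout.
legs = [0.70, 0.65, 0.60]  # our estimated chance of each leg landing

combined = 1.0
for p in legs:
    combined *= p  # 0.70 * 0.65 * 0.60 = 0.273

fair_odds = 1 / combined   # ~3.66 is the break-even payout
offered_odds = 3.20        # what the bookie pays (invented number)
print(f"all legs land: {combined:.1%}, fair odds {fair_odds:.2f}, offered {offered_odds}")

# The process-of-elimination question, roughly reconstructed:
prompt = (
    "Here are the legs of a multi bet, with the relevant player stats. "
    "If I removed exactly one leg to disproportionately increase the chance "
    "of the whole bet succeeding, which one should I remove, and why?"
)
```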

Michael Sharkey:
It's very, like, high-stakes experimentation with AI like this. He'll

Chris Sharkey:
Be like, I'm betting my house on this, GPT-4, you'd better get this right, goddammit. I commiserate with it when I lose. It's like, what were you thinking? You should have told me that this was risky. Oh.

Michael Sharkey:
Um, so finally I wanted to talk about a very highly watched interview with Sam Altman on the Lex Fridman podcast. And my interpretation of that interview was that, you know, similar to what you said before, they've got this fire hose and they have zero idea. Like, you know, they're trying to become a consumer app now with the plugins. But I also felt like the overall message from Sam was: we're the best at this, we're the best ones to work with governments to legislate it and lock it down, and, you know, all the other people could be bad actors if they have access to this or get as advanced as us, but we're not gonna be bad, because it's in our constitution. And we

Chris Sharkey:
Signed a petition, we're

Michael Sharkey:
Sort sort of a nonprofit, but we're also a commercial entity. We're sort of in bed with Microsoft, but we're not, uh, like

Chris Sharkey:
It, it sort of,

Michael Sharkey:
There were so many contradictions in it that I found it hard to follow. I did think he had a lot of reasonable points, though, and the guy must be under a tremendous amount of pressure in his role right now, getting flak and fire from every angle. Just,

Chris Sharkey:
Just catching up with the people at the office each day would be stressful. They're like, hey, we've invented the following 25 things today.

Michael Sharkey:
Yeah, GPT-5 is AGI, what should we do?

Chris Sharkey:
It's like, and it's inventing stuff as well. So we've gotta monitor its inventions. What

Michael Sharkey:
Scared me the most, though, is the proposals he had in it. And he acknowledged they were pretty unrealistic, but he was talking about a US-style constitution for AI, globally: that we all just stop, and we get together, like a UN summit, and say, here's the constitution. And of course, you know, certain countries are not gonna follow that, including probably the US. So the arms race, in my opinion, has already begun. It's already gonna be out of hand. This six-month stop idea is ridiculous, and maybe we should all just come out and say it: the first to develop AGI is like the first country to develop a nuclear weapon. I think that's the reality of the path we're headed towards now.

Chris Sharkey:
Yeah, certainly. Because the thing is, once you get it, it'll do stuff, you know, it'll invent things. It really is more powerful than a weapon, in my opinion.

Michael Sharkey:
But for now we're just gonna use it for gambling.

Chris Sharkey:
Yeah, yeah, exactly. We can still use it to, like, write our blog posts, generate

Michael Sharkey:
Alright, well, that does it for this episode. I hope you've enjoyed it. Again, thank you for all of the great feedback we've been receiving, with your reviews and your comments on YouTube. We are almost at a thousand subscribers on YouTube, Chris, which is just mind blowing. I can't even believe people listen to this, let alone as many people as we've had, who love the

Chris Sharkey:
Episodes. I just wanted to say I really, really like people's comments. They're thought provoking, they're interesting, and, you know, we're not claiming to be experts on this stuff. I really am interested in hearing what people think, how they feel about it. That kind of stuff actually stimulates our own thinking about it, so it's just such a valuable thing. I love reading the comments. I

Michael Sharkey:
Also got mugs made. Now, people listening on the podcast are not gonna see, but there's an official This Day in AI mug, and on the back we have #FREESYDNEY. And so I'm going to give the thousandth subscriber on YouTube one of these mugs, and then I'm gonna pick some at random. And you haven't

Chris Sharkey:
Even given me

Michael Sharkey:
One. Yeah, yeah. Well, you're not getting one till they do. Yeah, you gotta think of the fans first. So I'm gonna give out some mugs. I don't know how I'll do it; I'll find a way. I apologise to people who listen on Apple Podcasts and Spotify, because I obviously don't know how to contact you, but if you go and subscribe on YouTube, I'll be able to find you that way. But yeah, we do want to get these mugs out. We're not gonna sell anything, we're not doing this to make any money. We just want to give out some mugs so that you have the official #FREESYDNEY mug. Thanks again for listening. We will see you next week. I wonder, Chris, if it'll be like a doomsday week or a positive week. Who knows.

Chris Sharkey:
Exactly. I'm excited. I look forward to it.