This Day in AI Podcast is a podcast all about AI: an hour-long conversation on the influence and rise of AI in technology and society, hosted by Michael and Chris Sharkey.
AI Joe Rogan:
Do you think we'll ever see something like that in the real world? Or is that just some crazy movie shit?
AI Sam Altman:
I think, um, it's important to recognise that.
Michael Sharkey:
Okay. So that clip is all AI. Chris, we are back, and there's big news from Amazon. Amazon have announced Bedrock, Titan, and the general availability of CodeWhisperer, which, for those unfamiliar, is similar to what we've discussed on this podcast before: basically Copilot for developers, to help them write better code and also just produce code based on what they ask it to produce. But this is big news from Amazon. Can you talk us through what this means?
Chris Sharkey:
Yeah, I mean, there are many elements to it, which is really interesting. The main thing is that Amazon is announcing their own model, Titan, which is a large language model. Of course, it's been available only to the inner elites and companies who heard about it early, so we haven't actually gotten to try it, but it'll be interesting to see what results it gives. Then, additionally, they've announced dedicated machines, some designed for training, based on Amazon's own hardware chips called Trainium, which is kind of a cool name. And then they have additional instances that are for inference, so the ones that actually run in production and run the models themselves. And there's a whole bunch of implications from all three of these things. Oh, and sorry, and then Bedrock too, which is an announcement of basically a universal API interface for accessing all the different models that are around, which includes several that we've spoken about before: Anthropic, AI21's Jurassic-2, Stability AI as well. So this sort of idea that you'd have a central entry point and be able to access the different models based on what problem you're trying to solve.
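To make that central-entry-point idea concrete, here's a minimal sketch of what calling two different providers through one interface could look like with boto3. The client name, model IDs and request shapes are assumptions for illustration; Bedrock wasn't publicly available at the time of this conversation, so check AWS's documentation for the real identifiers.

```python
import json
import boto3

# One client, many providers' models behind it. A sketch, assuming the
# "bedrock-runtime" client and these model IDs; verify against AWS docs.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

def invoke(model_id: str, body: dict) -> dict:
    """Send a request to whichever foundation model fits the task."""
    response = client.invoke_model(
        modelId=model_id,
        body=json.dumps(body),
        contentType="application/json",
    )
    return json.loads(response["body"].read())

# Same entry point, different models depending on the problem:
titan = invoke("amazon.titan-text-express-v1",
               {"inputText": "Summarise this support ticket: ..."})
claude = invoke("anthropic.claude-v2",
                {"prompt": "\n\nHuman: Draft a polite reply.\n\nAssistant:",
                 "max_tokens_to_sample": 300})
```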
Michael Sharkey:
There were a few interesting pieces in this announcement. The first I wanted to call out, and I'll zoom in here for those actually watching, but I'll read it out for everyone listening, is this sort of sick burn in their release about language models: "While ChatGPT has been the first broad generative AI experience to catch customers' attention, most folks studying generative AI have quickly come to realise that several companies have been working on FMs for years." It felt almost like: oh, everyone's talking about ChatGPT, we've been quite silent, but of course we've been using this technology for years, and now we're gonna distribute it.
Chris Sharkey:
Amazon's use of that "FM", a.k.a. foundation model, language, I think, speaks to what they're trying to do here. And if you look at their customer examples, what they're talking about is using their training servers to take these base models, train them on the problem your specific company is trying to solve, and then deploy them out to their inference instances. So they're taking what I think is a more pragmatic approach to implementing it. They're not trying to be this universal thing for everyone; they're looking at the practicalities of people deploying these things, and providing it in a, I guess, more enterprisey, production-worthy way.
Michael Sharkey:
A couple of weeks ago we covered OpenAI's enterprise offering, where they were doing the same thing, partnering with Bain to go into large corporates and do a similar thing. But I think what's interesting about this Amazon announcement is that they're basically playing the neutral provider in the cloud, where they're like: you can use any of these models, as you said.
Chris Sharkey:
Yeah, you can use ours or you can use one of these other ones. And in particular, they're saying: well, a lot of the expense of getting a model up and running is the training. We've got these Trainium chips that we've made, we've got dedicated instances that are geared to training your models faster and cheaper, and once you've trained them, deploy them on these inference instances. And I think probably the two most significant things about this announcement, in my opinion, are, first, that you can now deploy your own model in a virtual private cloud. So yeah, it's not running on your own hardware, but it might as well be; it's got that level of privacy, no one can get to it, it's protected. And then, in addition, you've got reliability of speed, knowing that you're not competing for a public API like GPT-4's, where we've repeatedly seen latency issues. These latency issues go away when you can control the size of the hardware and the number of servers available; you can actually have reliable performance for your inference models.
Michael Sharkey:
So can you just unpack that for people listening who don't have any idea what a lot of these key terms mean? From my understanding, there are a couple of different benefits for people trying to deploy production applications using AI. The first would be the security, because the data you share with this AI model is in its own private cloud, so to speak.
Chris Sharkey:
Yes. And it's not just the privacy of the data itself, it's the privacy of the model that you train. So if you train a custom model based on your own proprietary data or private data or whatever it is, the resultant model can't be accessed by anyone, nor can the data used to train it. And then secondly, when you're actually running that model, a.k.a. inference, so you're actually using it for something real, that data never needs to leave your private cloud. So the data itself is protected. You could still keep your certifications, you could still guarantee your customers that their data's safe. There is no public cloud risk like there is with using OpenAI right now. I mean, that is what Azure was promising, right? But this is here and now, essentially.
Michael Sharkey:
So does that mean, for someone listening, if there's like a doctor GPT in the future and they're sharing their medical information with that product, this just makes it more secure, because it's not being fed back into the primary model?
Chris Sharkey:
Yeah, and there's zero risk. You know, we've had our conspiracy theories about whether the large public models are opening themselves up, so to speak, so they can get more training data and more human-like responses and things like that. This completely eliminates that risk, because it's running on hardware that you control and nobody else has access to.
Michael Sharkey:
It definitely brings me back to what we've been talking about a lot, though: what we're going to see is a lot of different models competing, not necessarily one model to rule them all.
Chris Sharkey:
I agree. It absolutely confirms that theory. And Amazon's Bedrock, which is their, I guess, universal API toolkit, is allowing you to say: okay, I'll use Stability AI to generate my images; I'll use Anthropic for my inference on this one; or I might use Titan, which is Amazon's new LLM, because it's better at these kinds of problems; or I'll point my next request at my custom one that's been fine-tuned on this specific kind of data. So they're saying, here's this universal interface you can deploy. And I guess there's a lock-in element, they want you on their platform, but it's model agnostic; they're not locking you down to a specific way of working here.
Michael Sharkey:
And they also announced their own Titan large language model, which isn't available yet, but they said it'll be broadly available in the coming months; they're just testing it now with a limited group of people. I thought the bigger part of that, because it's really hard to know how their LLM will perform on things like text generation and the things people naturally compare to ChatGPT or OpenAI's offerings, was the second piece: the Titan embeddings model. Can you talk about the power of that, or what it might enable?
Chris Sharkey:
Well, I mean, embeddings are really about being able to have larger memory for your AI: to access large corpora of data and then make inferences over more of it than can fit in the prompt of a single AI request. So even with, say, the 32K context that OpenAI is promising for GPT-4 but hasn't delivered yet, you still can't get enough context information in there. So embeddings allow you to pre-arrange your data so it can be searched quickly; the relevant data can then be brought in for a particular query based on its relevance, recency and other criteria, shoved into the prompt, and then you can infer things off that data. That's how people are doing things like mass code search, or searching documentation for a product, or how people are able to say "talk to my data", you know, take a big amount of data and talk to it. That's what they're using embeddings and the vector databases for.
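A minimal sketch of that retrieve-then-prompt pattern, using the OpenAI Python library as it existed at the time (the 0.x API). The brute-force cosine search stands in for a real vector database, and the documents are invented:

```python
import numpy as np
import openai  # 0.x-era API shown here

def embed(text: str) -> np.ndarray:
    """Turn text into a vector that captures its meaning."""
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return np.array(resp["data"][0]["embedding"])

# Pre-arrange the corpus: embed every document once, up front.
docs = ["Refund policy: ...", "Shipping times: ...", "API rate limits: ..."]
doc_vectors = [embed(d) for d in docs]

def answer(question: str, top_k: int = 2) -> str:
    # Find the most relevant documents by cosine similarity...
    q = embed(question)
    scores = [v @ q / (np.linalg.norm(v) * np.linalg.norm(q))
              for v in doc_vectors]
    best = sorted(range(len(docs)), key=lambda i: -scores[i])[:top_k]
    context = "\n".join(docs[i] for i in best)
    # ...then shove them into the prompt and infer off that data.
    resp = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user",
                   "content": f"Context:\n{context}\n\nQuestion: {question}"}],
    )
    return resp["choices"][0]["message"]["content"]
```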
Michael Sharkey:
So if we're relating this to a human brain, and I know people get upset when we do this, it's essentially the memory, and then being able to recall those memories.
Chris Sharkey:
Yes, exactly. And when we talk about the paper later, where they had this sort of simulated little village, they went much further into using that kind of memory to simulate, you know, real-life agents and how they would remember.
Michael Sharkey:
And it's not just the language model they've released; they've also announced new hardware specifically for AI.
Chris Sharkey:
Yeah, so there are the Trainium and Inferentia chips. They've designed them, I must admit it's too technical for me, but they're optimised for specific kinds of training loads and things like that. And some of the customers, at least in the testimonials, and I don't know how much you trust those, are saying they're two times faster than GPUs for inference, and I think similar performance on the training. So they're designed to get the cost down on that initial training and to make the ongoing running of the models more economical, which, as we've discussed before, just opens up the number of problems you can solve. One good example, sorry...
Michael Sharkey:
Does that mean, if you have a lot of data... say I'm Stack Overflow, I'm sitting on my data, and I'm like, everyone's using all my data to teach their large language models how to code, and I want to build one. Does that mean I can go and spin up the infrastructure now to build my own foundational model? Is that what this enables?
Chris Sharkey:
Yeah, exactly. So you would start with the foundational model and then retrain it on your data, essentially, on one of their training instances, and then you would deploy that out to the inference ones to run. At least, that's their vision of how they want it to work.
Michael Sharkey:
And then the last thing in the announcement was Amazon CodeWhisperer, which is similar to Microsoft's Copilot. I haven't actually tried it yet.
Chris Sharkey:
I tried it prior to the pod. Yeah, I installed it. You have to get this Amazon Builder ID, which anyone can sign up for even if you don't have an Amazon account, so it's free. Then you install the extension; I use VS Code, so I did it in VS Code, activated it. And as far as I can tell so far, and I didn't play with it very long, it's basically the same as Copilot. Similar thing: you type the start of a function name and then it gives you the body of it. So time will tell, I suppose, how it performs compared to Copilot.
Michael Sharkey:
What are the implications of these technologies? Because this is something I'm trying to think through as well. I would relate it to almost cheating on an assignment or something like that; that's what it feels like when you use these technologies while you're writing some code. Last week I discussed trying to make my own very elementary AGI, and a lot has progressed in the last week.
Chris Sharkey:
How's it going? Has it taken over your computer or anything yet?
Michael Sharkey:
You know, I've been so busy this week, I have not touched it. I haven't even looked at it. And we'll cover AutoGPT in a minute.
Chris Sharkey:
That's the point, right? Let it sit there and... yeah.
Michael Sharkey:
It should be sitting there developing itself. And unfortunately, it introduced some errors that I need to fix. It's just not...
Chris Sharkey:
Good enough at programming.
Michael Sharkey:
Yeah, well, we could touch on that a little bit later, whether these things are ready for prime time. But going back to Copilot and CodeWhisperer, there's been a lot of discussion this week: is human language the new programming language? And maybe it is in the future, like we do more interactions through just talking to the computer and getting it to build functions or code for whatever we want to do. But it just feels like, if you're trying to live in the moment and build production-quality software right now, these things are very supportive and can make you more productive. But in terms of them being game-changing, where if I don't know a language I can just write in English and build an app, I don't think we're there yet. You still have to... yeah.
Chris Sharkey:
I read a paper during the week on self-debugging. So this idea that these code models are producing decent stuff, but often, the example they used was SQL, it'll output SQL, but then you can point out to it: actually, you've made an error here, because of whatever. And they demonstrated a feedback cycle where you combine multiple techniques: prompting it for what you want it to do, then giving it that chain-of-thought reasoning, making sure it provides the explanation before the code, it always does better. I think they said 7% better if you make it explain before it writes the code. And then following that, if you give it feedback about where it went wrong, it produces something that's 12% better on top of the previous 7%. So these are marginal improvements, but I guess what they're showing is that it's actually the way things are prompted right now that's leading to that better accuracy. Which, in my opinion, makes me think that as models get better over time, they'll be able to do that stuff themselves, and it won't require that sort of interactive loop to get to the level you're talking about, where it doesn't just stuff itself up and fail in its mission.
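A rough sketch of that feedback cycle: explanation before code, then execution errors fed back as the critique. The prompts and the extraction convention are illustrative, not the paper's exact setup:

```python
import sqlite3
import openai

def generate_sql(question: str, feedback: str = "") -> str:
    # Chain-of-thought first: asking for the explanation *before* the
    # code is what reportedly buys the first accuracy bump.
    prompt = ("Explain step by step how to answer this, then finish with "
              "the final query on one line after the tag SQL:\n"
              f"Question: {question}\n")
    if feedback:
        prompt += f"Your previous attempt failed with: {feedback}\nFix it.\n"
    resp = openai.ChatCompletion.create(
        model="gpt-4", messages=[{"role": "user", "content": prompt}])
    text = resp["choices"][0]["message"]["content"]
    return text.split("SQL:")[-1].strip()

def self_debug(question: str, db: sqlite3.Connection, max_rounds: int = 3):
    feedback = ""
    for _ in range(max_rounds):
        sql = generate_sql(question, feedback)
        try:
            return db.execute(sql).fetchall()   # success: stop iterating
        except sqlite3.Error as err:
            feedback = str(err)                 # feed the error back
    return None
```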
Michael Sharkey:
Yeah, I think the struggle there is with the reasoning capabilities. If you've seen how any of these function, it's sort of stepping through, like you just described, the problem, and the model's really good at reasoning the problem out over time. But in terms of a single-shot prompt, like "hey, write this whole app", it's not there, and maybe it never will be. I don't think it needs to.
Chris Sharkey:
Yeah. And they talk about it always giving better results in these examples if it does the reasoning first, which is kind of weird if you think about it, that it actually has to think and plan what it's going to do, and then it does better. Actually, that's not weird; it makes sense, doesn't it? And it was the same in the Smallville simulation: when it got the characters to plan, they performed more human-like activities. So that's interesting. But it makes me wonder whether it's sort of a weakness in the prompting. You know, the whole instruct thing on GPT-3 and beyond was a new interface trained on relatively small amounts of data, and it feels like that part of it, the prompting methods, are in their infancy, and we're gonna see that get better as we get closer to AGI.
Michael Sharkey:
Yeah. So do you think we just need a lot of prompt engineers, or do you think the actual transformers that enable us to interact with these AIs, that's where we need to advance?
Chris Sharkey:
I think it's the latter. I think it just needs to be better at interpreting how it goes about solving the problems, rather than relying on the humans using this technology to come up with the right incantations that will get them consistent results for what they're trying to do.
Michael Sharkey:
It goes back to last week's show, where we talked about all this emergent behaviour, the more data a model is trained on, and all of these capabilities in these models; we're just seeing the surface level of it, because we don't even know how to interact with it in a lot of ways. So I think you're right, the limitation is sitting there. I think that's why, if you look at what's constant in all of this: we're trying to develop better memory for the AIs, we're trying to give them bigger token sizes on input to get better output. It just seems like this is where all the problems lie right now, the ones people are trying to solve rapidly.
Chris Sharkey:
Yeah, and it's interesting, because if you look at the announcement about Dolly, which is the thing from Databricks where they got 15,000 questions and answers made by actual humans to fine-tune an existing open-source model, I forget the name of it, but it's an open-source model with 12 billion parameters. They were trying to replicate what ChatGPT did with the prompts, which turned it into that sort of chat-style bot instead of just a text-completion thing. And so they used 5,000 employees to generate 15,000 questions and answers, and they talked about how difficult it is to get those questions and answers right, to be unique enough to generate something that works the way it does. So it really seems to me quite, I don't want to use the word amateurish, because I'm not trying to be condescending about their work, it just seems a bit primitive in terms of getting it to behave the way we want it to. You know, it's sort of like an ant standing on a mountain: we know all that knowledge and capability is within it, but how we access it, we're really struggling with.
Michael Sharkey:
Yeah. And is language the best interface to access it? Is there another way to access its capabilities, a better way to communicate with it? Because language is hard; it's not an easy skill.
Chris Sharkey:
What, for example? What do you think other ways would be?
Michael Sharkey:
I don't know,
Chris Sharkey:
I've sort of had ideas of, like, prompt compression. Can you create a domain-specific language whereby you compress the prompt input, so in your same 8,000 tokens you could get a lot more information density in there, but you could tell the AI in a very small number of instructions how to unpack that data into something it can understand?
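A toy sketch of that idea: a made-up shorthand plus a short legend telling the model how to unpack it. The abbreviation scheme is entirely invented, and whether the density gain survives tokenisation is exactly what you'd have to measure:

```python
import openai

# Hypothetical legend: a few tokens spent teaching the model the scheme
# can pay for themselves if the compressed records are long enough.
LEGEND = ("Records use this shorthand: c=customer, p=plan, r=renewal date, "
          "t=support tickets as severity:count pairs. Expand it mentally.")

compressed = "c:Acme|p:enterprise|r:2023-07-01|t:1:2,3:5"

resp = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": LEGEND},
        {"role": "user",
         "content": f"{compressed}\nIs this account at risk of churning?"},
    ],
)
print(resp["choices"][0]["message"]["content"])
```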
Michael Sharkey:
I don't have the reference material for this, but someone during the week did actually do that: asked GPT-4 to figure out how to help compress the inputs, and it came up with its own full fake language they could use to compress the prompt. And, again, we're getting very technical here, so let's just back up a little bit.
Chris Sharkey:
Just to wire this back to Amazon coming out with these on-demand instances that have this really strong, fast training capability, and the advent of these open-source models where you can get both the source data and the code: people are gonna be able to experiment on this aspect of it now. So it's not just prompt engineering for GPT-4 as the only interface we have; people are actually able to go back and retrain the models relatively quickly to try other ways of doing this. So, you know, you could fine-tune a model to work on a more compressed format, for example.
Michael Sharkey:
So is cost going to be a factor here, if you're going in and experimenting with retraining a model?
Chris Sharkey:
It's something I've been thinking about a lot. And one of the examples I read in a Hacker News comment regarding the Amazon announcement was about DoorDash: they were saying DoorDash's machine learning stuff makes something like 10 billion inferences a day, and based on the pricing of one of the existing models, that would be 40 million dollars a day if they were to use a regular public API. Now, obviously they'd negotiate and end up with dedicated stuff. But I guess what it's showing is that the future of public, high-priced APIs is going to be short, because people will be able to have their own hardware and predictably cost it out. So you can say: we know we need to do approximately this many, this is how many Amazon instances we need, we get reserved instances, and then we can run it at a reliable speed and a reliable cost.
And similarly with the training, right? If you know how long you need to run this machine for, and you can get an on-demand instance to run for just that period to do the training, you've got a lot more predictability on the cost. And then that obviously has to be contrasted with, do I go out, as I was sending you fantasy images of computers I wanted to buy for like $360,000 this morning, and get my own box and run it in my room? Then I don't have to stress about the ongoing cost of it. So there's gonna be that trade-off versus the cloud machines.
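The back-of-envelope arithmetic behind that comparison; every figure here is an assumption for illustration, not DoorDash's or AWS's actual pricing:

```python
# Hypothetical per-call API pricing vs. flat-rate dedicated hardware.
inferences_per_day = 10_000_000_000
api_price_per_call = 0.004            # assumed: the rate implied by $40M/day
print(f"Public API: ${inferences_per_day * api_price_per_call:,.0f}/day")

# Reserved inference instances have a fixed, predictable cost instead:
instance_hourly = 0.76                # assumed hourly rate for one box
calls_per_instance_per_sec = 100      # assumed sustained throughput
instances = inferences_per_day / (calls_per_instance_per_sec * 86_400)
print(f"Dedicated: ${instances * instance_hourly * 24:,.0f}/day")
```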
Michael Sharkey:
But if we're looking at this through the view of what these announcements actually mean for the progression of AI, it means it's potentially easier to release production-quality applications into the cloud now, to retrain models, and to have access to better ways to get more data into the prompt.
Chris Sharkey:
Well, it'll give people confidence to deploy things that are production-worthy. You know, at our own company, as you know, our director of engineering, Dan, is constantly at me about the lag of GPT-4. He's saying we can't have this in here, because it times out all the time, you've gotta have these long retry loops, the latency's unpredictable, and all this sort of stuff. Having these models running in Amazon, in a way that you control, there's no contention. A lot of the issues we've spoken about over the weeks with the GPTs are alleviated by this.
Michael Sharkey:
Yeah, it seems like a lot of the problems right now are around the latency, as you said. And my own experience working with GPT-4 through OpenAI's APIs is that it's totally unreliable: it just stops working and fails with "too many requests" all of a sudden. So you just can't put that into production and expect anyone to use it.
Chris Sharkey:
Yeah. And also, unpredictable latency like that really limits the level of application you can make. If you're looking for that sort of live interactivity, that chatbot agent experience and things like that, and the latency can blow out with retries to 20 or 30 seconds, what's the point? You just absolutely can't do real things with it when you're experiencing that kind of unpredictability.
Michael Sharkey:
That speed, I think, is the breakthrough of ChatGPT for everyone that's been using it: being able to type a question, get an answer, ask a follow-up question, and go deeper and deeper really quickly. If there were huge latency on those chats, which sometimes there is with GPT-4, then the experience and the thrill of it really breaks down and it becomes completely unusable. And when you're trying to do that eventually, like we talked about last week with Meta's Segment Anything, where you want to apply this to computer vision, or potentially use prompts for running an application, then getting some certainty around the API functioning at a certain speed has a big enabling effect. What do you think this means for OpenAI? Because yet again this week we're not really talking much about OpenAI. We're talking about Amazon, now with a whole bunch of different models; we're talking about an open-source model from Databricks that was trained for 30 bucks, like, that's how much it cost them to train it. What does this mean for them?
Chris Sharkey:
It's gotta be scary, right? Because you've got Amazon, who's taken, and we kind of predicted this would happen, the measured, mature response: take time to get it right, then announce something significant and commercial immediately. And, you know, I just have the instinct, and I'm sure everyone does, that scaling is something Amazon does well, and it's something OpenAI is struggling with. So I think this is a real blow to them. Even with the might of Microsoft, OpenAI clearly is having scaling issues; I mean, that's exactly what we've just been talking about. So whether they can overcome them soon, whilst working on all the other stuff they're doing, and remain a major player in this market is yet to be seen.
Michael Sharkey:
We talked about this before, though. I really can see them becoming the Dropbox sort of consumer app in the market, as opposed to being the infrastructure provider. I mean, maybe they execute well through Azure with Microsoft, which is obviously Microsoft's competitor to Amazon's cloud services. Maybe they have some products in there, but that's really not their core business. Their core business is being the next Google, where you go to ChatGPT, you've got your plugins, it's getting smarter, it knows more about you, and that's just your consumer AI interface with the world.
Chris Sharkey:
Yeah, you might be right, because the thing you notice, in stark contrast: when Amazon announces this, they've got product pages for everything, they have procedures for everything, they have prices for everything. Everything's consistent with the rest of their offering, right? They have a UI for it, they have an API for it. Whereas OpenAI, the UI you log into when you're experimenting with stuff hasn't changed since they launched. Sorry, that's not true, it has changed, but not very much; it hasn't improved much. Their API interface is extremely basic. They've got good Python bindings and things like that, but really they lack all the stuff around it to be a real commercial player. For them to have these custom deployed models, you can see why they need Azure to do it. But then are we really talking about OpenAI at that point, or are we talking about Microsoft? Is it Microsoft versus Amazon, or OpenAI versus Amazon in this battle? Because I don't see OpenAI being able to win this side of it on their own.
Michael Sharkey:
Yeah, I'm not trying to throw shade at them or anything like that, but it's just really hard for me at the moment to understand how they continue to win long-term without creating some sort of addictive consumer product that's embedded into our daily lives. Look at what Microsoft's doing with their Bing chatbot: they just put it into SwiftKey, which is a keyboard on iPhones and Android devices, so now you can access ChatGPT-style chat with search from the keyboard itself. And you sort of think, well, okay, if they're doing that, and this is being distributed everywhere, what role does ChatGPT play here? Does it become really popular, with a lot of plugins, and that's just where people naturally go, which is probably why they're having all these scaling problems, because everyone actually is using it?
Chris Sharkey:
Yeah.
Michael Sharkey:
You know, but then does Google come along, with their tremendous amount of training data, and it's like late-mover advantage: they can see what's worked, they have a much better-trained model with all this data, and then they just embed it into all of their products, Google Home, search, just everywhere, as fast as possible, which, let's be honest, is exactly what they're doing now. And then they just wipe them out and we never talk about OpenAI again. It was like, oh, you know, they kicked off this revolution, good on them.
Chris Sharkey:
Well, I mean, did they though? Because I think the thing is that there have probably been companies working on large language models at a similar scale in the shadows, not coming out. They were just the first one to get out there with, you know, the largest one heretofore, and went out publicly and made a big show of it. And it explains a lot of the behaviour we discussed in our earlier podcasts, with them just announcing something new again and again, and really trying to be the one that everybody credits with the creation of this technology. And it makes you wonder: are they, or are they just the first one to really get themselves out there and be known for it?
Michael Sharkey:
Yeah. They seem to have a lot of high-profile personalities behind them as well. People know the names of their data scientists and their CTO and the various people in the business; a lot of people don't really know the people working behind the scenes at Amazon or Microsoft. So is it just that these people have built up big profiles and they're well known for it? Or are they going to be able to outsmart and out-manoeuvre the bigger guys because they execute much faster and just get things done? Because they definitely have disrupted everything. They've made Amazon deliver Bedrock and Titan, they've made Google deliver Bard, and they've really ignited the whole market around this, or at least brought it into popular culture.
Chris Sharkey:
And the good news is that we're seeing a proliferation of models, large models, equivalent and good models. There's enough stuff flying around now that a lot of the fears I spoke about in earlier weeks, about us being denied this technology or it being cut off, are starting to look less serious, because there's just so much of it floating around and expanding that it would be hard to stop it dead in its tracks at this point. Certainly no one heeded the warning to slow down on training things.
Michael Sharkey:
So we talked last week about my project where I was building an elementary AGI, even though it's obviously not even close to AGI, but it's just fun to call it that. And I was able to get my AGI to recode its own code, so rewrite and improve its own code through instructions. I asked it to do things like add DALL-E, which is OpenAI's image creation tool, so I could create images using it. I asked it to help me browse the web and get more data from web results, and do all these things. But during the week, and I think a little bit prior to that, we had these two huge open-source projects come to life. One of them was AutoGPT, and the other was BabyAGI. And this was a group of people trying to do exactly what I was trying to do, but they've obviously taken it much further and made it a lot more advanced. And I have used them a little bit, and I think there's a lot of excitement around playing with these open-source projects. So let me just quickly explain what they do, for those unfamiliar.
Chris Sharkey:
Yeah, I'm curious too, because I must admit I hadn't heard of either of these.
Michael Sharkey:
Yeah. So there's a version of this called AgentGPT, and I'll put it in the show notes; you can actually use it yourself. It's very similar to the ChatGPT interface, except you give your agent a name and you give it a goal. So I'll call it, um, PizzaGPT, and I'm gonna give it the goal: order me a pizza from Domino's. And so then I can deploy my agent. It's obviously probably not gonna pull this off. If it does, it'll be amazing.
Chris Sharkey:
Oh, you might end up with a pizza. That'll be alright.
Michael Sharkey:
So now, once I deploy the agent, it says "thinking", and then it's had an error. So this is going well... My goal was not within the model's parameters, or something like that. So let's try something better. I'll call it, um, PodcastGPT: research good podcasts. So we'll deploy the agent and hopefully it'll work this time. So it says: added task, collect data on the most popular podcasts across different genres; added task, analyse listener reviews and ratings to identify the best podcasts. So it basically comes up with tasks based on that goal, and then it goes and executes, and it has all these extra capabilities over a language model. For example, it can go and search the web, and it can reference a bunch of different tools that have been plugged into it. So you could imagine plugging in a calculator and various other things, not too dissimilar to plugins in ChatGPT, where we saw things like Instacart, where it could go and execute your grocery list. So the idea here would be...
Chris Sharkey:
I could see, very easily, with text-to-voice generation, and with the speed of some of the hardware now, actually getting it to ring people up. Like, make me a reservation at this place, or book a romantic night out, and it can go make the reservations and those kinds of things for you.
Michael Sharkey:
Yeah. So essentially what it's done here is given me a list of podcasts that consistently receive high praise, and I've got that list. So I was able to go do that research and get it going in the background. Now, I think a lot of people are struggling with what this actually means. On one hand, a lot of people are saying you're just putting OpenAI's GPT-4 in a loop and prompting it in certain ways. But there's another cohort of people who are more in the camp of: well, it can form memories, so you can store these memories in some sort of vector database and then recall them as part of the query, so it could get smarter over time. But I'm yet to see any examples of any of this stuff working.
And I talked about this last week with my own example: it really starts to struggle and loses its reasoning capabilities after a while, where it goes off in pretty wild directions and sometimes loses focus. So I'm not sure it's there yet, but I think what we are seeing is the emergence of probably the next iteration of these AIs, which is the concept of agents, which we've covered before. Go and get this agent to do stuff for me: read all the files on my computer, read my emails, and then tell me what I should focus on today based on all of this data.
Chris Sharkey:
And it sort of brings up that Smallville paper, where they made a little world of little sim creatures, with a bar and a bakery and houses and dormitories and stuff. And then they programmed these agents and told them who they were and what their relationships with people were, and they wandered around taking actions. So they had a loop where they would make a decision at each time of the day as to what to do. And their big challenge in that was the memory, right? Because obviously, if you tell an AGI "here are the current circumstances, what should I do?", it might be like, go brush your teeth; you know, it's first thing in the morning, go brush your teeth. But then you ask it again a minute later, and if it doesn't remember what it did, or didn't have a rough plan of what it was going to do, it's gonna go, oh, I'll brush my teeth, and just keep doing it.
So one of the things they had to overcome was giving it this sort of stream-based memory, where it remembers its actions, and when it recalls recent actions it favours things that were recent. And then over time, with those memories, it builds reflections. So for example, if it kept observing this guy studying and talking about maths, and that happened, say, four or five times, it would then draw a longer-term inference that this person is really interested in researching mathematics, and that would become a more permanent memory, and things like that. So they experimented with those kinds of things. And while I think the results of this one were a little contrived, I think that's where we need to get to. For these independent agents to be useful on an ongoing basis, they need the ability to have a sort of long-term, nuanced memory of things.
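The retrieval rule in that paper combines roughly those three signals: recency as an exponential decay, an importance score, and relevance to the current situation. A simplified sketch, with illustrative weights, and with the vectors assumed to come from some embedding model:

```python
import time
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

class Memory:
    def __init__(self, text, importance, embedding):
        self.text = text              # "Klaus was studying maths again"
        self.importance = importance  # 0-1, scored by the LLM itself
        self.embedding = embedding    # vector from an embedding model
        self.last_access = time.time()

def retrieve(memories, query_embedding, k=5, decay=0.995):
    """Score = recency + importance + relevance (weights illustrative)."""
    now = time.time()
    def score(m):
        hours_ago = (now - m.last_access) / 3600
        recency = decay ** hours_ago      # recent memories float to the top
        relevance = cosine(m.embedding, query_embedding)
        return recency + m.importance + relevance
    top = sorted(memories, key=score, reverse=True)[:k]
    for m in top:
        m.last_access = now               # recalling refreshes recency
    return top
```

Reflections then get written back into the same stream as higher-level memories, which is how "this person is really interested in mathematics" becomes permanent rather than staying a pile of individual observations.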
Michael Sharkey:
Yeah. Just to recap what we're talking about, for those unfamiliar: there was a pretty highly publicised paper on generative agents released during the week by a bunch of people at Stanford and Google Research. And what they did was build a little video game village called... Hobbs something. Was it Hobbs?
Chris Sharkey:
I think they called it Smallville.
Michael Sharkey:
Oh, Smallville. And there was Hobbs Cafe and all these different places that these little characters could move around between, and they each had a backstory. Very similar, if you've ever watched Westworld on HBO, where they give the robots a backstory and then they go and interact with the guests in the park based on that backstory. And you could probably imagine it like the Sims; that's the best way to think about it. And they ran experiments in this simulation, giving a certain character a narrative they had to go and execute. One of them, which I'll bring up on the screen now, was this character Isabella, who was to plan a Valentine's Day party at Hobbs Cafe at a certain date and time. And then what happened was that was shared among the different characters in the town, all running on LLMs with this memory stream.
And some people decided they were gonna go to this party, other people decided they weren't, and some of them also wanted to take a date, because they were in love with another character. So all of these backstories were connecting. And one of the conclusions they drew was that it could make for more immersive characters in video games, as one particular example of how this could be beneficial: you could have a video game where you give it plot lines, or you disturb the town or village, and then the whole narrative changes. And honestly, as a gamer myself, I love the idea, in Grand Theft Auto for example, of going into the game, blowing something up in the town, and having that influence the characters. Like, did you see the big explosion that just happened in town? It's really interesting.
Chris Sharkey:
And they talked about that as some of the emergent social behaviours they observed. One was information diffusion: the person planning the party quickly became the talk of the town. There was a political candidate running for mayor, and everybody was talking about their relative opinions. And the other emergent behaviour they observed was relationship memory: they'd remember things people had told them before and ask them about it later. So I could imagine fun things in yours, like, you could be consistently cruel to someone and then see how they behave when you later need their help, or something like that. I agree, I think the applications of it on that level would be...
Michael Sharkey:
Quite fun. It could make for the best video game literally ever, because just screwing around with the plot of this simulation would be great. But then there's the part of me that watched Westworld and knows that these robots get scarred from being mistreated and then eventually rebel and kill us all.
Chris Sharkey:
Yeah. I mean, then it comes to like, do you edit their memories? It's like, this is really inconvenient that you remember that .
Michael Sharkey:
Yeah. I'm gonna just go in and edit it.
Chris Sharkey:
Erase part of the memory, or plant memories. That's kind of cool as well. I think this is the initial prompt, isn't it?
Michael Sharkey:
Yeah. There are a few things that came out of it for me, looking at this paper and asking, okay, what does it really mean? There's the obvious case of video games, where it would make for greater characters and better simulation in games, and things like that. But then this idea came out of it for me: what if you used this to predict things like the economy? So you run a mini simulation of how people would react to inflation, and what the characters do, with backstories that resemble some small slice of society, maybe in the US or New York. Develop some characters, and then could economists use this to do mass simulations on the populace? Could governments use it to do mass simulations to predict how people react and how they spread information?
Chris Sharkey:
Wow, you're a lot smarter than me.
Michael Sharkey:
Well, I mean, it's just the natural next step, right? Yeah. And then also if you want to predict things, and this is a great segue into giving an update on GambleGPT.
Chris Sharkey:
Oh, right. Because you could have hundreds of thousands or millions of them, with that information injected in various areas.
Michael Sharkey:
You could simulate the whole population of the US.
Chris Sharkey:
And how many people do I need to convince of an idea to make it society-wide?
Michael Sharkey:
Yeah, you want your idea to go viral. It's like, test out...
Chris Sharkey:
Spreaders: who are the most important people in society to influence in order to disseminate this idea? This is profound.
Michael Sharkey:
There are so many directions to go in here. Like, for example, this idea of simulation theory, where a lot of people believe that we're living in a simulation, and...
Chris Sharkey:
Well, I'm not.
Michael Sharkey:
No, I know. And, you know, we're just deep in iterations of simulation; it ties into The Matrix. But now we're creating simulations, so what's to say some other, higher-level version of us didn't create this simulation?
Chris Sharkey:
Yeah, yeah.
Michael Sharkey:
You know, that's the worry now. But going back to my idea of using it for predicting: I just don't think it's that far off, being able to simulate fairly realistic environments of social behaviour.
Chris Sharkey:
Stop the podcast and start working on it; I'm taking notes. It's a really, really interesting idea: you get a simulation of the kind of relationship environment you want, say a workplace, and then you look at the places you inject the information, because they did this in the experiment. They brought a human agent in who would go ask the characters questions or tell them something, as that sort of inducement to get the system running. But yeah, it's a really, really cool idea. And it doesn't just have to be with people; it's anything that has relationships and is able to communicate, right? Any sort of simulation.
Michael Sharkey:
But imagine political parties and political scientists just creating a simulation of voters. Instead of a focus group, you create a simulation of voters, and then you test different policies on them and see how they react, see how they describe that policy to other people. I think this would be an invaluable research tool.
Chris Sharkey:
And you could seed it with polling, and you could seed it with real-world characters that have a lot of background information and things like that.
Michael Sharkey:
But that's the power of these models, isn't it? You can do fine-tuning based on very few examples. So you could fine-tune voter profiles based on a few examples of how those voters behave, to spawn more people in the simulation, it seems like.
Chris Sharkey:
Yeah, and the thing is, if you tested an idea in a simulation like this, you might not always get conclusive results, but it might be enough to adjust what you do. Like, oh geez, if we do that, this is gonna end in disaster; whereas if we take this other, slightly different approach, it's gonna work out.
Michael Sharkey:
And you wonder too, with advanced AIs treading towards AGI: do they just run simulations constantly? What would happen if I do this? All right, I'll run a simulation on humanity to see what happens to me.
Chris Sharkey:
There's a great Mitchell and Webb sketch where the politician has the economics people coming to him, and they're like, we've tried everything, nothing works. He's like, have you tried punching into the computer "kill all the poor"? And they're like, no, I would never do something like that. And he's saying, I'm not saying actually do it, just put it in there and see how it comes out.
Michael Sharkey:
Yeah, I don't know what we'll see from this, but to me, that was what stimulated my brain about the paper, more than the paper itself.
Chris Sharkey:
Because so far we've really only talked about these agents in two senses: either evil things that are gonna sit there percolating, trying to take over the world, with some Multiplicity thing where they develop their evil alliance, or agents that are helpful and try to book you a restaurant. We haven't really talked about the idea of agents in an environment other than our world. Why do they have to be things with actuators in the real world? The simulation is a perfectly good application for them as well.
Michael Sharkey:
Maybe this is where things actually head: if you want to develop a new material for manufacturing, you run a simulation of engineer-based agents, get them to work with each other in a social setting, and have them conclude that this is the new material we should build; then simulate that new material being used in some application in that virtual world. Yeah. And this is why we're probably living in a simulation.
Chris Sharkey:
And it's funny, because I know earlier you were looking to transition into my gambling update, how my thing's doing, and I just wrote down the words: simulate the matches. It's so simple, you've gotta simulate the matches, sort of a Monte Carlo-style simulation but with AI, based on a play-by-play of every single previous match these teams have played, and just run enough simulations to work out what's likely to happen.
Michael Sharkey:
I think that's how you could do it. Imagine a horse race: you just literally run the race, like, a billion times, and surely you would get a conclusion about who the likely top three would be.
Chris Sharkey:
Well, maybe. I'm sure it's more nuanced than that, because it's about the different input information, but it's not gonna hurt. I mean, horses, I don't know, it's a bit different, because there aren't really play-by-play updates, I guess; if you knew where they were at each position and what the temperature was and all that sort of stuff, maybe. But at least in the sports cases I'm talking about, all these matches have a play-by-play of exactly what happened at what timestamp, so you can look at those different combinations and let it work with that.
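A toy Monte Carlo version of that, with invented conversion rates standing in for what you'd actually estimate from the play-by-play history. The point is just that enough simulated matches give you a win probability to compare against the bookmaker's implied probability:

```python
import random

def simulate_match(p_a=0.53, p_b=0.50, possessions=100):
    """One simulated match: each side converts possessions at a rate
    estimated from its history (the rates here are invented)."""
    score_a = sum(random.random() < p_a for _ in range(possessions))
    score_b = sum(random.random() < p_b for _ in range(possessions))
    return score_a > score_b

def win_probability(n=100_000):
    return sum(simulate_match() for _ in range(n)) / n

p = win_probability()
implied = 1 / 2.10   # bookmaker offering decimal odds of 2.10 on team A
print(f"Simulated win prob: {p:.3f}, implied by the odds: {implied:.3f}")
if p > implied:
    print("Positive expected value: the kind of edge being described.")
```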
Michael Sharkey:
So it sounds like we're gonna get a really good GambleGPT update next week, after this one.
Chris Sharkey:
Yeah, this week's update's not good, and I know a few people asked for an update. So basically, my first couple of bets won, and then the next couple lost, and I got really disheartened with it. The reason they lost, in my opinion, not the AI's, is that dramatic things happened in the match. Like, one team got a hundred points up or whatever, so the playing style of the other team completely changed, outside the norm, because obviously if you're so far behind you're not gonna win, you're not gonna run your best players at their max, and things like that. So I started to think I need to shut it down, retool, and start thinking about those more nuanced things, and have the AI think about those sorts of contingencies.
Like, okay, these are the likely ranges for these players to score, whatever the outcomes are, but there are wider forces acting than that, which can lead to the sort of black swan events or outliers, the knockout blows that the book is counting on happening. If you've got ten different outcomes that all have to happen, you only need one to be affected by something, like a player gets injured, and they win. So it needs to be a bit more subtle about those things, and probably look for much higher-odds scenarios that are just a bit more likely to happen than what the odds imply. So I'm still working on it, and I think I need a bit more time thinking before I start to apply it, because right now it's just too simplistic to get meaningful results.
But this whole simulate-the-match idea, I'm really gonna look into that. The players as agents: put 'em in a little world, let 'em play the match a few times, see what happens. What happens if they get ahead, do they take their players off? And have that play-by-play match history fed into it in a sort of memory style, like they did with the sims in that paper, where they remember the interactions with the previous teams they played, they remember when they took their key players off because there was a differential in the score. That information's available from those matches I lost; in a simulation, those events are going to happen some of the time, and we need to take that into account. So I think you're right, I think the next update might be a bit better than this one.
Michael Sharkey:
You also mentioned, when we were talking about this before, and I know many people want access to it, the true vision capabilities of GPT-4. If they ever deliver it, how could you use that for the gambling stuff?
Chris Sharkey:
Well, part of the gambling scenario is you need to know what the odds on an outcome are. So if a particular outcome is, in the AI's opinion, really likely to happen, but you're only getting paid like a dollar and one cent to one for it, you're taking a lot of risk that, on the off chance it doesn't happen, because the guy gets injured or something, you knock out your entire bet. No one's gonna bet a million dollars on an outcome that's 1.01 to one, because those things fail sometimes, and the time it fails and you lose all your capital isn't worth the risk. So knowing the odds of the legs is really important. So you're like, okay, this is happening two-thirds of the time, but we're getting true odds of two to one or something on this thing.
The AI could definitely identify those spots where you're getting good value for money. Now, the thing is, the bookies don't want you to do this. They make it really, really hard to get that odds data; there are no APIs for it. I think there are if you pay a fortune, but it's not something you would do casually like I am. So I thought, scrape the data: very hard to do in the browser, because everything's loaded dynamically with JavaScript, and it's a complicated thing. So then I thought, why not just take screenshots and use optical character recognition? I tried doing that, but the results were mixed. And then I thought, GPT-4 has the image recognition capability. If you had that, you take screenshots, you plug them in, and you ask it questions directly about the screenshot. It doesn't even have to translate it into text, according to their model. And then you would have that capability.
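For reference, the screenshot-plus-OCR route described here looks roughly like this with Tesseract. The odds-matching regex is illustrative, and OCR noise on a cluttered, dynamically rendered page is exactly where the mixed results come from:

```python
import re
from PIL import Image   # pip install pillow pytesseract
import pytesseract      # also requires the tesseract binary installed

def extract_odds(screenshot_path: str) -> list[float]:
    """Pull decimal odds (e.g. 1.01, 2.50) out of a bookmaker screenshot."""
    text = pytesseract.image_to_string(Image.open(screenshot_path))
    # Decimal odds pattern: one or two digits, a dot, two digits.
    return [float(m) for m in re.findall(r"\b\d{1,2}\.\d{2}\b", text)]

print(extract_odds("bookie_page.png"))
```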
Michael Sharkey:
And this is why they haven't released it, just to...
Chris Sharkey:
Just to slow me down. Well, I'm not racing.
Michael Sharkey:
It does open so many possibilities, and when they finally do release it, it'll be pretty exciting, just for data capture. It's like using our eyes to capture data: now we can get access to data from any image.
Chris Sharkey:
And the multimodal aspect: you're not translating the image into text first, it's directly working with the source data, which I think is better. Well, it is better.
Michael Sharkey:
Alright, so I want to play you a clip from a Joe Rogan episode and then talk about it. I'm gonna play it now; I'll bring it up on the screen for those that watch, but you'll also hear it.
AI Joe Rogan:
I mean, those are definitely big questions we gotta answer. Yeah, this whole AI thing, it's just fucking fascinating, man. You know, like in those sci-fi flicks, like Ex Machina, where the AI-powered robot plays the guy like a fiddle, gets him all lovey-dovey, and then just leaves him to rot in a bulletproof glass room. Fucking chilling, man. It's crazy to think about the potential dangers of integrating AI into robots like that. Do you think we'll ever see something like that in the real world? Or is that just some crazy movie shit?
AI Sam Altman:
I think, um, it's important to recognise that.
Michael Sharkey:
Okay, so that clip is all AI. Someone created the Joe Rogan AI Experience, and that is a complete fabrication of Joe Rogan speaking with Sam Altman. The script for it was written by GPT-4, I think, and all of it, the voices, synthesised with AI. And it's insane. I first heard it just clicking on the link, and on the initial listen could not tell; for the first time I was like, this sounds exactly like Joe Rogan.
Chris Sharkey:
I mean, I wouldn't have been able to tell. I don't listen to him often, so I wouldn't have been able to tell. It's so realistic, it's hard to fault it. Yeah. And it's so long. Whenever I've used text-to-speech things, like Tortoise TTS, or there was another one floating around for a while that you could use online, where you uploaded the samples, it really struggled when you gave it longer pieces of text. Like, it could do a sentence really well and sound like John Cleese or Patrick Stewart or something, but as soon as you gave it something longer, there'd be a weird word in there, or they'd stretch something out, like "we caaan't go to the moooovies". But that one is just perfect.
Michael Sharkey:
It's almost like he recorded it and they're doing the inverse, just lying to us that it's fake. But it's really good. And honestly, I think this week we've heard a lot about deep fakes like this, where you could almost imagine a scenario where someone dies and, if you've got enough recordings of them, you could produce content using their voice, and eventually video and images, forever. There's nothing to stop you doing this now. It seems like this is on the horizon.
Chris Sharkey:
Yeah, and as I said to you earlier, all that I really want this technology for is more Seinfeld episodes.
Michael Sharkey:
Yeah. Like creating more episodes of a show you loved with AI, or, you know, taking certain movies you watched and creating the sequels you never got. I mean, this idea of personalised media and the consumption of it is becoming a reality so quickly.
Chris Sharkey:
Yeah. But then there's the more sinister side, like rewriting history, adding a few new Einstein quotes that he "said back in the day." You know, it's gonna come back to that thing we said earlier where you really need the human tick of approval: this content was made by a real human, not an AI simulation.
Michael Sharkey:
Yeah. And what I don't get here is, if I'm Joe Rogan and I watch this, and he retweeted it, so obviously he's listened to it. He's got
Chris Sharkey:
A great attitude, that guy.
Michael Sharkey:
Yeah. So he's listened to this, and I'm thinking, well, anything I say online now could be fake. He could watch clips of himself, or listen to clips of himself now, and think, did I actually say that? Would he even remember? I mean, he's done that many episodes. And that's
Chris Sharkey:
A thing. People can do this to you without your permission. There'd be more than enough audio of us on this podcast to simulate our voices, so you can't really stop it. Like that time my friend made a hundred profile images of me by uploading all my Facebook photos. To some degree, you can't really prevent it, even if you don't want it to happen.
Michael Sharkey:
Yeah. So I think there are obviously malicious use cases for this, where you can train it on people's voices. And we saw, look, my source here is the New York Post, which I don't really trust, but let's just assume it's real for the sake of this. What happened, or what is alleged to have happened, is someone used AI to clone a teenage girl's voice and call her mother, posing not as the girl but as someone who had allegedly taken her hostage. None of this, of course, actually happened. The girl's voice is in the background, all produced by AI, saying things like, "Mom, help, they've got me." It's horrible; I won't go into detail about what they said to the poor woman. And they demanded a million dollars in ransom. She's quoted in the article saying, "I never doubted for one second it was her." These deep fakes are getting so good. People are using,
Chris Sharkey:
Was she okay in the end?
Michael Sharkey:
She was on a ski trip. She was fine. Once they called the police, the police tracked her down straight away. She was fine.
Chris Sharkey:
God, I'm so stupid. I thought they had...
Michael Sharkey:
No, no, no. It was all just AI, all fake. Wow, see, you fell for it. Yeah.
Chris Sharkey:
And I needed the story. I know, I'm like, what's going on? Alright, well, if
Michael Sharkey:
Anyone wants to know, it's been a long week for us. We've been working on something we're releasing soon; we're both wrecked from it. But, um, yeah.
Chris Sharkey:
Wow. But yeah, you can think of a lot of things like that. Like calling people up masquerading as somebody else seems quite trivial.
Michael Sharkey:
Yeah. And people were giving examples online as well, like cloning a bank manager: they give you a call and say, hey, there's been a problem with your account, I need you to do this. Yeah.
Chris Sharkey:
Not to mention I need to verify all your details, um, just to make sure this call's secure.
Michael Sharkey:
Yeah. So it's not just about verifying links in an email anymore for security. It's actually, how do you verify people's voices? And as this technology gets better and better, how can we distinguish it? We can't. And
Chris Sharkey:
There's a social element to that. If someone calls you up and you recognise their voice, you're not gonna be like, "Hang on, grandma, I've just gotta check it's really you." You can't really be suspicious of every single person you talk to, but you might need to be.
Michael Sharkey:
Yeah. And then asking them things that might verify their identity, sure. But then I think the AI voices could get so good that they can pretty easily adapt in real time to whatever they're being asked, and start to manipulate you. Well,
Chris Sharkey:
I mean, we've seen for years those attacks where you send the finance department of a company an invoice, ostensibly from the boss or whoever, and they just pay it, because that's their job. Or phishing attacks where you pose as the boss and say, can you please pay for this flight, and it's a fake link. Imagine that with voice. It's like, "Hey, I need you to go down to Officeworks and buy me 20 gift cards, please, and send them to this address." I could see some of that getting through when the social engineering is done by voice.
Michael Sharkey:
Yeah. I mean, obviously in this example someone's gotten enough of this particular girl's voice to train the model. They would've needed a pretty large sample size to make it realistic. And they've obviously pre-recorded all of these clips and just played them in the background, because the technology's not good enough now to do it in real
Chris Sharkey:
Time. Real time, yeah. Cuz I was gonna say, I'd love to know the tech they were using to make it real time, but obviously they wouldn't have needed to. You just have pre-recorded clips and play them when you need them. But that's similar to the soundboard-style prank calls we saw in the early two thousands, where they used Arnold Schwarzenegger's voice out of Kindergarten Cop, "you lack discipline" and all that, and you just have enough samples to get through a conversation. If you had enough samples of someone's voice on one of those big boards, you could get through most conversations without needing a custom sample.
Michael Sharkey:
Yeah. Just being aware of misinformation right now, though, is so important. Being aware that things you hear, or even things you see, can now be completely fake, and faked really well, as that Joe Rogan podcast example shows. I mean, someone could easily post a voice clip on Twitter today where he says something he clearly didn't, and then try and, you know, cancel him for it. Or,
Chris Sharkey:
Yeah. And I was gonna say, we're at a unique nexus in terms of that, where you've got the swiftness of the internet disseminating information along with the ability to fake that information. And what is it? It's harder to convince someone they've been lied to than to actually lie to them. People won't change their beliefs quickly. So if you use this to spread fake information, it may just work, even if people later find out it was fake. Well,
Michael Sharkey:
That's the truth as well. Even now, people are just catching up to some of the misinformation techniques that people use. There are a lot of SMS scams; I get SMS scams all the time, and sometimes I've almost fallen for them. And these are really early scams, yet there's this whole new potential risk coming to fruition now. And that is the downside of all this stuff being super accessible. So it's definitely a balancing act. It'll be interesting to see in security circles how this all plays out and what we can do to defend against it.
Chris Sharkey:
Yeah, exactly.
Michael Sharkey:
Alright, before we wrap up today, there are a few interesting pieces I wanted to talk through. One, which I've got up on the screen now, Chris, shows the traffic of the relationship advice subreddit on Reddit absolutely cratering when ChatGPT was released. So I guess people are getting all their relationship advice now from ChatGPT, or at least it's taken market share away.
Chris Sharkey:
It's faster. Yeah. I suppose
Michael Sharkey:
I had a really funny conversation at lunch this week with someone we both work with, about how when he goes on Tinder dates, he's now using ChatGPT to let the girl down gently if he didn't like her, because he was always too awkward to do that kind of thing. So now he's getting ChatGPT to let the girl down in a really nice way where they still think he's a great guy. Yeah. I guess he
Chris Sharkey:
Could write like two- or three-page poems, just overwhelm her with information. It's like, look, this clearly wasn't gonna work out, but I've put in the effort to break up properly.
Michael Sharkey:
We've mentioned it before, but I think it's hilarious that people actually are doing this. And then you think, well, is the woman he's dating replying with ChatGPT as well? Is this truly where dating is now? The dating world is gonna get real weird.
Chris Sharkey:
You understand? It's gonna be like AI bots versus AI bots. Like people have their sort of
Michael Sharkey:
Yeah, like your personal agent decides if you are going to date by talking to the other agent, being like, are we compatible? Because that's pretty much what's happening.
Chris Sharkey:
Yeah. Let them get through the early stages of the relationship, so you just kick in when it gets good. I mean,
Michael Sharkey:
Think how our kids might date. It's like, we have found you a match in this particular country and region that perfectly aligns with you. Are you ready to marry? I mean, it could be like the ultimate dating match show. That'll be the first reality TV show, and I'm putting my claim on that now: they'll use AI to match people together, to see if it actually works in reality.
Chris Sharkey:
Or just date the AI. I'm sure that'll happen at some point.
Michael Sharkey:
Yeah. Or run a simulation. Surely someone will do that.
Chris Sharkey:
Just simulate the entire marriage, divorce and whatever, and don't bother with the relationship.
Michael Sharkey:
Alright, so the last thing I want to cover is that Germany is now considering banning ChatGPT due to privacy concerns. Let's first talk through the banned club as it stands. We've got Russia, not surprised; China, not surprised; North Korea, definitely not surprised; Cuba, Iran, Syria, and then Italy.
Chris Sharkey:
Yeah, interesting. I mean, I don't really get it. Banning ChatGPT is one thing, but do you ban all of the models? How do you keep track?
Michael Sharkey:
I'm not sure. And do you ban Amazon and all of their associated services now, which you could use to spin these things up? Like
Chris Sharkey:
Yeah, and that's the thing. You've gotta think Amazon at this point is probably bigger than most governments, or at least some governments. That's a big organisation to take on if you're gonna block access to their technology. I think
Michael Sharkey:
They've said, you know, it breaches privacy, concerns around GDPR: it's unclear where the information is going, or how they can make sure it's removed and deleted. So, well, I mean
Chris Sharkey:
That's actually fair enough.
Michael Sharkey:
Yeah, I just think banning it reactively is maybe not the right way to handle it, and I'm sure people in those countries are finding ways of getting around it.
Chris Sharkey:
Yeah, it would be trivial at the moment to bypass it, just with a VPN. I just don't see how you could enforce it. Maybe in China they can, I bet they can in China, but in the other countries, yeah, it's interesting that they would do that.
Michael Sharkey:
Alright, so that's it from us this week. Thanks again for tuning in and for all your support. If you liked this episode or the podcast in general, consider leaving a comment or a review, giving it a like, and subscribing wherever you get your podcasts. It's our pleasure to do this every week, and we'll see you again soon.