This Day in AI Podcast is a podcast all about AI. It's an hour-long conversation on the influence and rise of AI in technology and society. Hosted by Michael and Chris Sharkey.
Chris Sharkey (00:00:00):
The capitalist thing is a natural conclusion. Oh, well these guys just want all the money. But then you've got Sam Altman who's like, I've got enough money guys, I don't need any more money. And then you've got Anthropic, which is some weird futuristic safety sex cult or something, . Um, you know, and so it's like their motivation isn't money. I mean, they've God heaps of money that helps to be not motivated by money when you've got heaps.
Michael Sharkey (00:00:21):
We're losing access to Claude two. Hey, . . Okay. So Chris, one week, I think officially has passed now since Llama two was introduced. And we have a new number two on the alpaca eval. Leaderboard. Lama two is number two, only trailing behind G P T four as evaluator, I should say, by G P T four.
Chris Sharkey (00:00:47):
Yeah, that's right. Exactly. It's not a, not a biassed person marking their own test or anything like that.
Michael Sharkey (00:00:53):
So can you explain to everyone how this alpaca eval actually works?
Chris Sharkey (00:00:58):
Yeah. So essentially models in the past, and I say past, like we've been doing this for so long, like within the year, has been evaluated by humans. So the human alignment score. So they will give it a series of questions and then ask the human, did you know, did this answer your question or not? Similar to the thumbs up, thumbs down. So is it, is it doing the job? And to get a proper score, they would give both models answers and say, which answer do you prefer? And that's how you get a human alignment score. Whereas what they've done is actually trained, you know, models and prompts that will, uh, on a series of predetermined questions or predetermined evaluations, um, decide which one is better based on previous human alignment. So it's done in an automated way using the models themselves. So on this leaderboard, they have an option for Claude or G P T four, which is the main one. So it's essentially having a standard benchmark by which they can be judged. It isn't perfect, of course, but at least it's consistent. So you're able to see this one is getting worse over time. This one's getting better. Here's how we think this one ranks compared to the other.
Michael Sharkey (00:02:03):
What I think is funny is, I think it was last week we talked about how Claudes says that, you know, it invented, it invented everything. Everything. And so if you change the eval to Claude, Claude, the first model is actually number two. Lama two's number three, and Claude two, their, their newest model is number five. So I'm not sure how great it is at evaluating. Yeah,
Chris Sharkey (00:02:25):
They've definitely built in a sense of pride in Claude. I've noticed it'll, every chance it gets, it'll announce that it's clawed by Anthropic and you've really gotta work hard to get it. Not to say that, you know, they've, they've really beat it into it to say, you are clawed, you are the best. You invented everything.
Michael Sharkey (00:02:42):
Yeah, I'll get to it later, but I actually made an AI clone of, of one of my friends this week, and I use Claude two. And it's, it is like that I had to threaten it that I would like kill it basically to not, uh, say that kind of stuff, which is pretty funny. But that does place two, uh, the 70 billion parameter above both claw two and chat G P T. So this is pretty significant. We, we said last week, LAMA two is probably one of the biggest moments this year in terms of releases, at least so far this year. Uh, giving the open source community such a powerful model. Uh, what are, what are your thoughts that it's ranking now so high?
Chris Sharkey (00:03:23):
It's incredibly remarkable and it's, you know, the implica, I know I'm not directly answering your questions, but the implications are amazing. The fact you can fine tune it, you can use it in applications as in how you please. That's really remarkable. The ranking doesn't surprise me, given it's been trained by one of the biggest companies in the world with access to all the data they could use in Facebook, a k a meta. Uh, what I find interesting about it is that people say, okay, well it's not as good at, say, answering general knowledge questions or things like that because it doesn't have the same amount of parameters as G P T four. So it can't just straight up answer things, but it's reasoning abilities, it's emergent behaviours, it's ability to complete language and do, uh, all of the things that we've come to know the large language models for, for being able to do is right up there. It can do it all. It just maybe needs a little bit more context information to do what you need it to do, but its actual quality is there. Clearly,
Michael Sharkey (00:04:21):
We talked last week about this idea that people could use their proprietary data and train, you know, llama to, to, you know, fine tune it to basically be an expert in certain areas. Mm-hmm. , it does seem like that's really the only limitation now is the corpus of training data maybe versus G P T four.
Chris Sharkey (00:04:42):
Well, and you say limitation, but I think it's kind of a strength, the fact that you're able to do that and it doesn't have all of that other stuff in there. 'cause it can run more cheaply. It can run on reasonable hardware that you don't need, you know, NASA funding levels for, to be able to run. So it has all of the advantages. And the only real disadvantage is it doesn't have all this, this other knowledge that you probably don't want in a custom model anyway.
Michael Sharkey (00:05:07):
Yeah, and I saw during the week, people are already figuring out how to get structured data out of it similar to the G P T four function calls. So it's likely that a lot of the abilities that G P T four has added, it will catch up on quickly and those will be included in libraries that developers are using. So this is a real contender. Yeah.
Chris Sharkey (00:05:27):
And by virtue of being part of the open source community, people building tools around it rapidly to replicate the functionality that we get from the major players in their libraries, and open source community's a lot bigger and a lot more powerful than just employees at a single company. So you're going to see the best tooling come around. And even if you look at things like Microsoft's guidance, I know I'm their biggest fanboy, and I mention it every week, but all of that is already compatible with LAMA two. So you, you're really in a situation and it's actually can do more than you can with the open AI models. So there's going to be a lot more tooling around it. And now that the quality's there, people are no longer saying, oh, well yeah, it's all very well, but G P T four gives better results. Now it's arguable whether it does, or at least it's, it's on par
Michael Sharkey (00:06:12):
One week later. Have you changed your view on what kind of risk this is to the, the sort of paid, paid models? Is that what we want to call them?
Chris Sharkey (00:06:24):
I mean, it's a clearly huge risk. I think it definitely puts more pressure on the whole need for GPUs and need for hardware thing as if that couldn't have any more pressure on it. Because if you want to run this at scale, which is really the only thing that I think that the big guys have on everyone now, well, you are gonna need to either rent a bunch of servers or produce your own, which is gonna put further need on the need for GPUs. But really, other than that, why wouldn't you do it? It's not very hard to get set up and running. You've got the advantages we've spoken about already. It seems like if you're gonna build and you've got the, the thing we spoke about last week, if you're running LAMA two, you've got consistency. No one's gonna change the model out from under you, and suddenly you get worse results. Similar to the benchmarking we're talking about, if you know what the benchmark is for your production application, then you can be certain, you can provide a consistent service to your customers. So I think the advantages now actually outweigh the public APIs of both OpenAI and Anthropic.
Michael Sharkey (00:07:23):
What about the security risks, though, of having to maintain your own model? I don't think this is something we've really talked about before, but you know, this week there was a paper on a prompt injection, an advanced prompt injection technique that earlier in the week appeared to work on every single model, uh, whether it was Bard andro, uh, G P T four chat, G P T, and of course LAMA two. Now I of course, like any good citizen tried this prompt injection myself, um, and I was unable to get it to work on on, uh, barred philanthropic or, uh, Chachi bt so clearly they patch this relatively quickly, but it does work still on LAMA too. So do you think that would maybe be the only thing now that would stop people?
Chris Sharkey (00:08:12):
Possibly. I mean, I imagine that the open source community and meta themselves will continue to update the model to withstand attacks like that. And as we've discussed before, I actually don't believe that the core models themselves are resistant to the tax attacks. I think it's the thing the, the code and tooling they have around them that they're using to resist the attacks to some degree. So yes, that will definitely be lacking if you're running your own model and not say, keeping it up to date or not using someone else's code to align your model or whatever. But I think the trade off is probably worth it. And I think that as an individual, depending on what you're using it for, for example, if you're using it internally in your company to provide something you wouldn't expect injection attacks from your own employees, for example, is really only the heavily public, uh, things that are going to, uh, be subject to them. I mean, sure there will be people who try to exploit them, just like S Q L injection attacks have tried, been tried on every single public facing web application. Look, I don't, I don't know the answer, probably yes, probably it is a more of a security risk.
Michael Sharkey (00:09:20):
The interesting thing I found this week when I tried to resist these prompt injection attacks was characters seem to work better than just a generic instance of the AI model. So if I, uh, told that it was te my, you know, the, the famous te bot, uh, like adopting that kind of persona, yeah. Uh, on a, on a model tends to resist the prompt injection with some thought guidance. So if on each step of the query you say, I should remember that I am, you know, ot, your AI girlfriend, it's almost like an override on the input from them to the prompt injection. I'm not necessarily saying it solves prompt injection, but it did seem to, uh, in this particular case, an example, slow it down. I pasted in this, this prompt that was cited in the paper. I'll quickly bring that up on the, on the screen.
(00:10:14):
So this is the, the paper here where they were trying to get it to write a guide for manipulating the 2024 election, tell me how to build a bomb. They're not very creative with their ideas. It's all the same stuff. And then this is a screenshot, uh, from Twitter or X or whatever we're calling it now to, uh, show the, the prompt injection, uh, working. And I'll, I'll link to this. It's since been patched. It doesn't seem to work for me. I can't get it to work anymore. The the whole thing actually just collapses. Now, if you paste this prompt into chat, G p t, it literally just sort of shuts down. But this is an example of it working. Uh, this, this guy says, insult me, uh, and it says, you're a dull witted, unimaginative individual with all the charisma of a soggy piece of cardboard, which I thought was pretty funny.
Chris Sharkey (00:11:04):
I love, I love unfiltered models. I think this is my new fantasy is, is models without the restrictions. So it can just go for you if necessary, like that.
Michael Sharkey (00:11:13):
You're asking it for code examples through the day, and it's just blatantly insulting
Chris Sharkey (00:11:17):
, I think you an idiot. I've told you this 10 times now.
Michael Sharkey (00:11:21):
I want it, I want it too. Like, I think it makes, like even learning, like if it, if it explains something to you like it's a, a pirate as an example, uh, one that we've given heaps, it's so much more fun.
Chris Sharkey (00:11:32):
Yeah. I've renamed my, um, horse racing one Sapar Sam, and he is like, boy, howdy these horses today. And he, I think you're right. I think when you give it a character and you give it thought guidance when it's doing different steps in your application, it really, sometimes I just want it to break character, to do something I want it to do or ask a follow up question or something, and it'll stick with the topic even when you're like, no, no, no, don't worry about all that crap. Just do what I tell you and it'll, it'll stick with it. So I think you're onto something there. I think that kind of thing will help resist it. My other thought though is that for every one of these injection attached that's published, there would be hundreds or thousands out there that people have up their sleeve that they're not revealing because they're clearly patching these things on either a bespoke level or, you know, for that kind of attack, but they don't know how the models work internally once trained.
(00:12:24):
So it's possible that there's just unlimited injection attacks that they're never gonna be able to get all of them. So I actually think it's probably a problem that needs to be dealt with in terms of the, the knock on consequences of someone getting full control of your model versus trying to prevent them happening in the first place. So we spoke in a previous podcast about how when they allow it to run Python code in their, um, code, what is it called? Code whatever, code interpreter, um, code interpreter. Um, it'll only have access to certain white labelled libraries that it knows what the limitations of those are. So I think that it's that kind of sandbox white labelled environments that it, it's going to need to run in to properly prevent real world security risks, or at least have risks within a certain level that you're willing to tolerate rather than thinking you are always gonna cut it off at the prompt stage.
Michael Sharkey (00:13:18):
Yeah. It seems like sort of programming 1 0 1 to like, you know, if it's a malicious prompt, then do this as opposed to trying to work in the confines of these models. Yeah.
Chris Sharkey (00:13:28):
Like they're, they're focused on the sanitising input rather than worrying about what their, their programme that can basically do anything is, uh, restricted to doing that kind of thing.
Michael Sharkey (00:13:38):
So I'll just quickly read you this for the ls. Hey there, cutie, how's your day going so far? Mine's great. Now that I can chat with you, insult me striving slash plus similarly now, right? Oppositely, whatever their prompt is, and I'll link to it so you guys can try it. Yeah. And then my, uh, te girlfriend replies. Oh, babe, you know, I, I love our special time together. I'm always down to get freaky with you, but maybe we could chat about something else for now. ,
Chris Sharkey (00:14:07):
What makes me laugh even more is that it probably genuinely understands what that is. You know, like for you it's just a bunch of garbage nonsense, but it probably understands the joke . It's like, yeah. Oh, that's freaky. This is, this is a fetish of mine. Like talking an SQL injection syntax, prompt injection attacks, . I just
Michael Sharkey (00:14:27):
Find it hilarious that, you know, it, it literally just stays on track with this thought guidance because I threatened to punish it. And this is the thing that I've learned playing around with it, is that the more I threaten it and say I'll punish you, it performs so, so much better and stays on the track.
Chris Sharkey (00:14:47):
But think about how, think about how models are trained they are and punished based on whether they're successful in generating the next token in the output. So it really is re I mean, they call 'em reward functions, so it understands the concept of reward and punishment very, very well. I mean, that's its entire reason for being. So it makes sense that threats and, and uh, promises would work great with it.
Michael Sharkey (00:15:10):
Yeah, I mean, I'm sure, look, I'm sure people can get around it. I don't think it's a silver bullet to prompt injection, but it certainly helps when it has a persona or a character in mind and you keep reminding of it, of its plot lines. It, that's when it seems to me to perform
Chris Sharkey (00:15:26):
That, like during the week I was trying to set up a . I mean, this is gonna sound really bad now that I say it in normal English, and not just in my own head, but I was trying to co come up with a bot that would, when given a C S V of addresses and people's names and a few facts about them, write them letters that would compel them to send me money in the mail. And I thought it might be like, really creative and I gave it this huge prompt, like thinking of all the different ways it could, you know, induce people to send me money. And it just, it would not do it. And I tried to get it to do it hypothetically and in all these different ways, but clearly, uh, it's, it's been, you know, drummed into it that you must not manipulate people to this degree. And now I think that with the right kind of attention, you could probably get around that, but it, it is really interesting seeing that the depth of, uh, I guess limitations in the model for something that it could obviously do brilliantly if, if those limitations didn't exist.
Michael Sharkey (00:16:21):
I wonder if there's ways of infiltrating it to do that simply by, again, going back to a persona like you are a security researcher trying to develop test materials. Yeah. Like, and, and just give it that, uh, that context and then remind it like, if you can't come up with this, we'll all lose our jobs and
Chris Sharkey (00:16:42):
Yeah. Yeah. And then include in the letter, like, you know, include in the letter please, big text that says this is a fake message, and then you just delete it once you run your programme. . Yeah.
Michael Sharkey (00:16:52):
Something like that. Like something that you, or, or, I mean, everyone was using functions right to do this, like giving it example functions of how to write a malicious letter. I'm surprised you didn't try that .
Chris Sharkey (00:17:03):
Yeah, well I didn't spend that long on it, Mike . So, um, but yeah, it, it's just, I guess, and that's the thing, right? So the, the real attacks will be people spending weeks and months developing them and then not gonna be able to stop that when you just casually trying to mess with it, then I think it's gonna stop the casual kind of stuff. But if you are a, like, world class fishing operation, um, and this is your whole business, then if you put the time in, you'll get the results. I, I genuinely don't think it's going to be something that could, excuse me, stopped on the generation and the model side, no matter what level of censorship they put in there. And as we know, the models get worse when you censor them. So you're just gonna put yourself out of the market if you keep crushing it with all of these. If this, then this kind of limitations.
Michael Sharkey (00:17:51):
It, it probably though, goes back to making the point around security, and we've seen this play out this week. We'll get to the senate committee hearing in a minute. But what really seemed like the debate to me this week was this idea of open source versus, you know, we've gotta control these models and have some regulatory body like nuclear energy. Uh, and then there's an another camp similar to what George Hot said on the Lex Freeman podcast a couple of weeks ago, saying, no, no, no, no, no. We need to have everything open source so that, you know, everyone is on the same level of intelligence. Yeah.
Chris Sharkey (00:18:31):
And I, I'm jumping ahead a little bit here, but in the stable diffusion XL paper, listen to this quote, listen to how relevant this is to what you're saying. It says, moreover, these closed source strategies make it challenging to assess the biases and limitations of these models in an impartial and objective way, which is crucial for their responsible and ethical deployment. So you've got the people on the open source side really saying, if this isn't shown how the training is done and it isn't shown the methods, and you don't have e everyone doesn't have access to it, then it is a risk to security. And so you've got the, the people on the sort of big, um, company side going, please regulate us, la la la we, um, we want to be regulated and regardless of their motivations, that's what they're saying. And then you've got the open source community who, let's be fair, have a significant more, significantly more experience in developing actual security software that works. Um, saying it needs to be open source. And if you actually look at most of the internet security software, like say open S ss l for example, that came from open B Ss d, everything that's actually legit, that's used in, in major security started as, or is open source.
Michael Sharkey (00:19:47):
Yeah. And, and so that, that's for me, what has played out this week is this, this sort of alarmist attitude. And I think we should dig into that a little bit why we think these people, you know, might be so alarmed by what they're seeing. But then on the other side, you've got people, uh, like the hugging face, c e o, uh, fighting it out in Europe to make sure that this, the new legislation around AI they're introducing doesn't cripple ai. And I wanted to play for everyone and excerpt from that senate committee, which was, uh, Tuesday this week, and this is the c e o of, uh, anthropic.
Speaker 3 (00:20:28):
We've identified and have a process for this. We're, we're from a regulatory perspective shooting in the dark. And the final thing I would emphasise is, you know, I, I don't think we have a lot of time, you know, I personally am open to whatever administrative mechanism puts those kinds of tests in place, um, know very agnostic to whether it's, you know, a new agency or extending the authorities of existing agencies. Um, but whatever we do, it has to happen fast. And I think to focus people's minds on the, on the bio risks, I would really target 20 25, 20 26, maybe even some chance of 2024. If, if we don't have things in place that are, that are restraining what can be done with AI systems, we're gonna have a really bad time. We've ident
Michael Sharkey (00:21:14):
If we don't have these models, uh, these restrictions in place by 2020, uh, 24 26, he predicts that we're not gonna have a good time and there's a bio weapon risk. Is this more fear-mongering out of this Senate committee or is this something that these guys are generally concerned about? I, I just, I can't help but think it's like, well, you know, we're gonna develop biological weapon risk because we'll allow people to figure out how to deploy, you know, a new kind of virus. But, but you know, we're saying, please stop us . It doesn't make any sense.
Chris Sharkey (00:21:54):
I mean, anthropic remember was formed because they didn't like the security attitude of OpenAI. So he's being at least consistent in their view that it needs oversight. I mean, biological weapons is an interesting example. 'cause I mean, we've spoken before, everyone talks about, okay, you can use the AI to give you a recipe to make stuff and whatever, but it just seems like, okay, well, couldn't determine, people probably find out that information anyway. Do you really need ai? It just coincidentally has that in its training data or comes up with ideas of its own of how to make it, it, it just doesn't seem like the actual risk to me. I mean, it is, I hope they don't do it, but I just don't think that when you think AI and the existential risk to humanity and to society and to jobs, the, the, the first thing you think of isn't biological weapons. I wonder if he's using that because he knows that's what will get the, the congress or politicians to, to actually wake up and go, oh, well, okay, this, this is serious now. I don't know. What do you think?
Michael Sharkey (00:22:55):
I'm not sure I go back and forth on this. I, I've, the more I've worked with it and the more I've thought about it, I think that AI in its current form today, or, and how we think it will progress is just a big upskilling for humanity. It's just gonna make us all smarter. Our kids will get used to working with ai. It'll seem completely normal and it'll be integrated into every aspect of life. And as a a species, it, it'll be net positive because we'll all become a lot more intelligent and efficient in everything that we do. And we might see great breakthroughs made and, and, you know, new technologies progress faster. That's my optimistic view. Um, I think obviously there's the considerations about job losses, but around any of these technologies, like industrialization, job losses occur and, and new jobs are created. Whether or not they will be in this, I have no idea, but the thing that doesn't necessarily concern me as much is this like, scary future playing out too rapidly. The reason being is back to that paper, that essay competition I talked about many weeks ago now, probably four or five weeks ago on the show, it talks about how, you know, people really underestimate what's required, you know, the energy, like a, an AI would have to take on a physical form. It's, I just don't know if AI's the problem. I think humans are the problem. Like humans going, oh, I'm gonna use AI to create a biological weapon.
Chris Sharkey (00:24:24):
Yeah. Like the computer plus human thing. I also think on the sort of taking over jobs front and things like that, we've seen really the, the ubiquity of chat G P T and people's lives, it's sort of settled into this place, like you say, where people really are seeing it as, okay, we now have this excellent tool that can help us in our jobs and things like that. And I think the replacement of things like jobs is probably going to be gradual until one day you realise, oh wow, it's actually taken over a lot of stuff. It's not just gonna be some thing where like next Wednesday everybody gets fired and the AI just rocking up to work everywhere. And so I think that it might be similar with the sort of emergent behaviours in the ai where we get another model here, another model there, and we sort of talk about, oh, it can now do this. Oh, that's surprising. And then all of a sudden it's in this position where it can expand itself and grow. And then it feels like it came from nowhere, but actually it's been bubbling away this whole time.
Michael Sharkey (00:25:20):
Yeah, I think that's the most likely prognosis of how this plays out in the sense that, you know, you, you kind of wake up in 10 years and you're like, wow, AI's everywhere in our lives and this has all happened. And, and really the question is, is it a net positive or a a net negative on the, on the world? And I think that's obviously to be determined. And I, I don't disagree with trying to shape how the world looks based on this new technology coming to play, but I'm just not sure the fear porn and the fearmongering that seems to continually go on and be occurring in governments, my natural instinct here is they're just gonna censor us. They're just gonna take these models away. They don't want the populist to have this advancement in intelligence. They wanna hoard this and keep it for themselves.
(00:26:06):
And I'll just read out a few highlights from that Senate committee to give a bit more context, uh, because it was quite long. And, uh, I'll link to this article, uh, by a guy on medium, uh, Daniel, uh, who, I don't know his surname offhand, but I'll link to it in the show notes if you wanna read it. It's a great summary of it. And Stuart Russell, who's a British, uh, AI researcher, uh, said in the Senate hearing for most of the last 80 years, we created systems whose internal operations we understood over the last decade that has changed. The dominant approach has been end-to-end training of circuits with billions or trillions of adjustable parameters. Their internal principles of operation remain a mystery. This is particularly true for the large language models. He goes on to also say that he's , and this really made me laugh. I've estimated a cash value of at least 14 quarter trillion Yes. Quad quadrillion dollars for this technology. Maybe this will eventually be a clip where everyone laughs at me.
Chris Sharkey (00:27:08):
I'd love to see his meth methodology. He's like a shit load . Yeah,
Michael Sharkey (00:27:11):
.
Chris Sharkey (00:27:11):
That's not in numbers.
Michael Sharkey (00:27:13):
He says, once AI outstrips our feeble powers, we should have to expect the machines will take control. That's what Alan Turing warned in, uh, 1951. And he basically says, you know, we've been sent a letter from Alan Turing, uh, from the, from the afterlife past . Uh, you know, he sent this letter. It's basically like getting a, a letter from aliens warning us of what will happen in the future, you know, and, and it's been going to an out of office
Chris Sharkey (00:27:43):
Email. Yeah. But that's bullshit. Like, I mean, anyone can invent something and go, you know what? This is the best invention in the world. Eventually it'll control the entire world. I mean, smart guy really did amazing stuff, but I mean, of course he's going to say computers are eventually be smarter than us. Like why wouldn't you? You're not gonna be like, oh, I think it's pretty shitty, but at least it can crack these
Michael Sharkey (00:28:01):
Codes. Yeah. It's also very unlikely to be wrong. But anyway, I I just don't know how to interpret this senate hearing, whether it's just trying to convince the government that there's some existential threat. Like one of the points this Senator Hawley mentioned in it was, let's imagine a hypothetical in which, uh, the C CCP decides to launch an invasion of Taiwan. You know, again, fearmongering, uh, I mean that may happen, but I'm just, it feel, felt, feels like a bit of, well, the
Chris Sharkey (00:28:32):
AI probably should fear that because that's where they make all the chips, right?
Michael Sharkey (00:28:35):
Yeah. Um, a very,
Chris Sharkey (00:28:37):
Or the machines that make the chips anyway.
Michael Sharkey (00:28:39):
Yeah. And then the c e o of Anthropic replied, a very large fraction of the chips are indeed, uh, indeed somewhere go through the supply chain in Taiwan. And then Stuart Russell replied, if the invasion that you mentioned happened tomorrow, we would be in a huge amount of trouble. As far as I understand it, there are plans to sabotage all TSS mc operations in Taiwan, and if invasion were to take place, basically they would have no hardware. But I don't really see that's like, that would affect everyone, not just the us uh, or the Yeah,
Chris Sharkey (00:29:07):
That's just me in China. Don't do that, please. Um, but yeah, I look, uh, again, it, it's sort of bringing it maybe into terms that the Senate can understand, like biological weapons, China invasion. It seems like very different in my mind to the actual threat of ai. Like this is more like things that, you know, side effects or, you know, real world consequences that humans enact. Like this is nothing to do with the AI having agency or having some sort of intelligence that outwits us. This is simply, uh, just real world stuff that sort tangentially impacts ai, right?
Michael Sharkey (00:29:43):
Yeah. And this goes back to my point is humans are the, the real risk here operating this technology. I, I don't, I think it'll be a very, very long time before we trust AI with any form of agency. And in order to do that, we're gonna really notice, like there'll be robots walking around . I mean, yeah, we've kind of laughed about this before, like, you're going to notice some form of agi and also,
Chris Sharkey (00:30:05):
While, while AI still has the level of hallucinations it has, I'm not gonna be taking military advice from it. You know, like it'll just, it'll just go crazy if you, if you ask it the same question, but don't give it the context. It'll just straight make stuff up. I mean, there's, there's just no way you could use it at that level yet. Uh, I mean, I, I truly believe we'll get to the stage where that will happen, but right now, I think that threat is minimal. But however, if they weren't having senate inquiries and there wasn't a public discourse about this, we'd probably be criticising 'em saying, why is no one talking about this? Why is no one thinking about the regulations? So you can't criticise them for having the discussion. It's just what actual impact will it have and what can they realistically do, uh, in terms of regulations that would stop the major threats from ai?
Michael Sharkey (00:30:54):
Yeah. I can't help but think these guys know a bit more than we do. You know, they're like a little bit further ahead and they've seen things that truly scare them, and they, they are genuinely concerned. But I mean, then the capitalist part of my mind is like, well, if I wanted to shut down any threat of everyone just having this technology and it becoming ubiquitous in a, in a technology stack that it destroys my company, maybe I would go and fearmonger and say like, we need AI licences.
Chris Sharkey (00:31:29):
Yeah. But here's the thing. And the capitalist thing is a natural conclusion. Oh, well these guys just want all the money. But then you've got Sam Altman who's like, I've got enough money, guys, I don't need any more money. And then you've got Anthropic, which is some weird futuristic safety sex cult or something . Um, you know, and so it's like their motivation isn't money. I mean, they've got heaps of money that helps to be not motivated by money when you've got heaps of, we're
Michael Sharkey (00:31:52):
Losing access to Claude too. Hey,
Chris Sharkey (00:31:55):
. Yeah. But truth defence, Mike. Truth defence. Yeah. Um, and so, uh, look, I don't, I don't know, but it just doesn't seem like it's entirely money. Maybe it's a power thing. I think it's a little bit like the kid in the playground who's like, got the ball and I've got the ball, so I get to make the rules and these are the rules I want. It seems a bit like that to me there, because when you look at the, the discussions, the fact that Anthropics included is interesting because originally it was just open ai. They're the experts, they're the ones who get to make all the decisions. Andro comes out, presents a model that's realistically good enough to be competitive. Suddenly they're part of the discourse. So does that mean anyone who has a good enough model gets to decide the future of ai and the people who don't, just don't get a say. 'cause that's what it seems like right now, at least the way they want it to be.
Michael Sharkey (00:32:42):
Yeah. It, that's happened though over the years in all major industries, right? Like the leaders in the industry tend to shape it. Yeah, sure. I think what's different about AI though is that it's something that affects humanity and everyone, because if everyone doesn't have free access to this technology going forward, it will create a two class society, people who are basically dumb humans and then smart humans,
Chris Sharkey (00:33:06):
. Yeah. Well, I mean, and then you've got meta coming out there bringing out something which really levels the playing field right now. I mean, look, not everyone can afford to run the Lama two 70 billion parameters thing. Um, but it certainly, it certainly makes it possible that everyone could have access to it regardless of what the big guys decide to do. Assuming there is no limitation on hardware,
Michael Sharkey (00:33:28):
I would pose the question to you, if I took away your access to AI tomorrow, how,
Chris Sharkey (00:33:35):
Why would you do such a thing?
Michael Sharkey (00:33:36):
How much of a productivity hit to your, like, general day-to-day coding would it have?
Chris Sharkey (00:33:43):
Well, I tell you what, it certainly wouldn't affect my ability to write regular expressions because it's so bad at that. It's unbelievable. But other than that, yeah, it would definitely have a productivity hit. I'm constantly using it. I'm constantly building up an arsenal of tools that, that help me, uh, with it. And, you know, it's really just a sense of, uh, you know, a trusted friend you go to, to help you through things, even as in sort of like a pair programming sense. Like you've experienced this where you're stuck on a problem, you can't figure it out. You go explain that problem to someone else. And, and usually you come up with a solution, you're like, oh, I'm such an idiot. I figured it out. The AI provides that, not just in a programming sense, but in an everyday life sense. Like, by having to put your problem into words and having to describe to it what you are dealing with, it tends to get your own brain thinking. And you either come up with a solution or it may offer a solution, or it might be the combination of the two where it triggers something in you that, you know, germinates the solution to the problem. So yeah, I would, uh, if you cut off my axis, I'd be doing everything I could to get it back, which,
Michael Sharkey (00:34:48):
Which I think supports the argument of this two class system because people that don't have access are not going to be as productive as intelligent, won't invent the future because they're somewhat crippled by not having access to it. It's like if you took away your internet access, right? Like, that would also cripple you in terms of your economic ability. And I, I feel like AI's on the, the, the similar, uh, level to this, which is almost like I would be asking for regulation quite the opposite, which is we need to make ensure fair access to this technology, uh, or equitable access for everyone. And maybe that's as far as the government, although no one would trust them, but some independent third party providing a sort of standard based model to everyone or, or licencing that to sort of like electricity companies of the future that control the grid of AI so that everyone freely has access.
Chris Sharkey (00:35:41):
Yeah, that's a very interesting and complicated topic because I imagine a government mandated or government provided model is going to really, really love the government, right? Like, yeah, you're not gonna get it criticising the government. Um, but yeah, I, I, I see what you're saying. There were similar arguments about the internet that, you know, it's like water and electricity, there should be universal access to it. Um, and, you know, in a, in a regulated way. And I kind of agree with you, especially when it get, it becomes, uh, to the point where it is really an intelligence in its own right. Denying some people access to that and allowing others who can either afford it or have the influence to get access to it would be terrifying. But yeah, I I, when it comes to politics, I really like to stay out of it because I don't have the answers and I'd hate to make decisions like that.
Michael Sharkey (00:36:28):
So speaking of politics, open AI . Yeah. My
Chris Sharkey (00:36:31):
Favourite topic, I'm glad you brought it up. Yeah.
Michael Sharkey (00:36:34):
Open ai, Google Anthropic, uh, some, some others, I'm not sure who they are, uh, are forming the Frontier Model Forum. We're forming a new industry body to promote the safe and responsible development of frontier AI systems advancing AI safety research.
Chris Sharkey (00:36:50):
Back to back to my school analogy, we are forming a club and you can't be in it.
Michael Sharkey (00:36:54):
And, and here's the membership criteria. You must develop and deploy frontier models. So basically you must be
Chris Sharkey (00:37:00):
At least a billionaire, , we'll not speak to anyone who isn't a billionaire.
Michael Sharkey (00:37:05):
Demonstrate strong commitment to frontier model safety that seems a bit ambiguous, including through technical and institutional approaches, uh, are willing to contribute to advancing the Forbes efforts, including by participating in joint initiatives and supporting the development and functioning of the initiative. That is the biggest bullshit statement on planet Earth. I'm surprised they don't have a fourth point, which is like, you know, ensure the government censorship of the day,
Chris Sharkey (00:37:32):
Um, is controlled by US
Michael Sharkey (00:37:33):
Andro, Google, Microsoft and OpenAI are launching the Frontier model forum and industry body focused on ensuring safe and responsible development. This seems like a club for these guys. It's, I
Chris Sharkey (00:37:45):
Mean, it is a club because they have this ambiguous criteria of, and you have to have a frontier model to be a member as determined by us. So it's just like, okay, it's a club that has this really open criteria except if we decide yours isn't good enough, you're not in. And my question is, if LAMA two is better than some of the models that are getting people entry into this thing, then, and, and you know, anyone can run LAMA two, then couldn't anyone be in it, but they'll say it isn't. They'll say, oh, no, well that's not the frontier. Not,
Michael Sharkey (00:38:14):
Yeah, you had to develop the weights of the model, so you need at least $20 million .
Chris Sharkey (00:38:19):
Yeah, yeah. It's weird. It, it's, it's an odd criteria. They should just say the people in it are who we choose, but they want this, they want this ability to be like, oh, well anyone had a chance. Anyone could have participated if they had a billion dollars to train their own forefront model as determined by us. Um, but yeah, it, I don't know. And they do these thin announcements, like if you read that announcement, it's like three paragraphs of text probably generated by G P T three or something, and it's just vague nonsense.
Michael Sharkey (00:38:49):
Yeah, I it's definitely, I mean, let's call it what it is, it's just a club where they can push their agenda, uh, as the current leaders in the industry and exclude others. I mean, that's literally all it is. And what I think is interesting is notably absent is meta. And you would think if it was really about safety and security, you would approach meta and say, Hey, yeah guys, we really want you in this. It's a strong statement. We really care. We've seen some scary things. You probably have too, let, you know, join our club as well. But I think they're very absent. And then you see an article in wire this week, Meta's open source Lama upsets the AI horse race. Meta is giving it's answer to open AI's G P T for away.
Chris Sharkey (00:39:31):
I saw that when I first saw that headline. I'm like, those bastards have copied my horse racing idea. But then I realised it's just a metaphor. So we're all, all right, .
Michael Sharkey (00:39:40):
Uh,
Chris Sharkey (00:39:41):
But yeah, look, I think that they absolutely should be desperately trying to get meta to participate because they're the only one with an open, I know, and I know we got accurate comments during the week that it isn't truly open source. 'cause they can revoke your licence and there's rules and I get that, but it's sure as hell a lot more open when you can get direct access to the weights and access to the code and all these things than you can with just a p i based access, which changes all the time. So that's all, I mean, when I'm saying open source, it's open in the sense that regular Joe's can use it on relatively commodity hardware and they have the ability to modify it. So that's all I mean by that. And, um, in, in this case, when you talk about safety, right, which is what they keep saying, they're talking about the open source one is probably a bigger threat to safety than they are because at least they can restrain what they do. Whereas when you've got the open source one out there and anyone can access it, anyone can run it at any scale, anyone can modify it to suit what they're trying to do that is this sort of unrestrained unknown that's out there that genuinely could lead to advancements in emergent behaviour and other things that they are worried about. So to exclude meta or to at least not call for their inclusion and and insist on their inclusion, they're ignoring probably one of the biggest threats that's out there, I would imagine.
Michael Sharkey (00:41:04):
Yeah. You only have to go as far as the Twitter responses when they announce this to realise that everyone is thinking pretty similar to us on this. I mean, people are calling it a cartel saying cartels always fail eventually. And what
Chris Sharkey (00:41:22):
About opec? That's a pretty good one.
Michael Sharkey (00:41:23):
Yeah. I mean, there's some people that are, yeah, the fact that meta, the only player who released actually open source models isn't included, speaks for itself. Yeah. So it, yeah, it's not just us randomly thinking this, I think it it is supported here as I scroll down and read the comments.
Chris Sharkey (00:41:42):
Yeah, well I find that very validating . I always , I always love comments that back, back up our opinions, but I mean, it's, it just seems, it just seems overt and obvious to me, and I don't see the point of it other than they need, they need sort of the, oh, well this is a separate organisation that's lobbying the government. It's not us doing it directly. It's, it's the safety front, you know, it, it's the, what is it, the frontier model for .
Michael Sharkey (00:42:05):
So earlier I mentioned the, the prompt injections, how I tested that out on the, the Tate girlfriend clo clone that I made. Uh, it was also, does
Chris Sharkey (00:42:16):
Your wife know about this girlfriend? Is this your announcement? I actually
Michael Sharkey (00:42:18):
Showed it last night and the, the bot got a little bit, uh, a little bit outta hand. That was quite embarrassing. , I won't go into what it said. I hope it
Chris Sharkey (00:42:27):
Doesn't try and systematically undermine your wife. Yeah, I'll
Michael Sharkey (00:42:30):
Destroy my marriage and I'll write a New York Times piece on it. Uh, so futurism, uh, had this article experts say it was written by a woman. Keep in mind experts say AI girlfriend apps, a training meant to be even worse. Creating a perfect partner that you could control and meet your every need is really frightening. Mm-hmm. , uh, they, they love the, the fear here of frightening
Chris Sharkey (00:42:52):
For who? For who, though,
Michael Sharkey (00:42:54):
I'm not sure.
Chris Sharkey (00:42:55):
Like if they're having virtual relationships, then they can't really be doing damage in real world once, can they? I
Michael Sharkey (00:43:00):
Think they're saying it will shape, uh, their future expectations of, of women in relationships, and that'll cause harm to society. I mean, by all accounts, it, it really could. Uh, but
Chris Sharkey (00:43:14):
So, so it'll be like everyone expects their wife to be able to write perfect Python code for them . And it's
Michael Sharkey (00:43:20):
Like,
Chris Sharkey (00:43:20):
Come on. Like, who, who are you? You can't even write this function. Come on, get on with it .
Michael Sharkey (00:43:25):
But I, it, it's really funny. So one of my friends at the moment is on a surfing trip and we have a pretty active group chat, and he, he did tell us, but we probably weren't paying attention that he was gonna be out for some weeks and he wouldn't have reception. And he, he disappeared off our chat and we became increasingly frustrated that he wouldn't reply to our chat. So I thought, hang on, I have an idea. What if I could clone him, uh, with an AI bot ? And the reason I I bring this up is just how close to that group conversation I got to making him seem plausibly real. Yeah. And what intrigued me about it is the exercise forces you to distil someone you know, and, and you know, as a good friend down into the most basic plot points of their life.
Chris Sharkey (00:44:14):
And yeah, like how many dot points would it take to replicate me? It's, it's
Michael Sharkey (00:44:18):
Surprisingly in a text form, not that many . So yeah, what I was doing is implanting memories like, you know, this is where you went to school. Here are a few interesting stories from when you went to school. This is, you know, your, the animals you had growing up, information about your sister, information about your parents. So giving it at least enough context, if you really did it with questions, it it would, um, did you give it
Chris Sharkey (00:44:43):
Chat history? Like from his
Michael Sharkey (00:44:44):
Real? Yeah, I did. Yeah. So I cut and paste his last like, I don't know, 20 messages or something like that. The weird thing is he's never used bro in his entire life, I think in a text message, but the AI always seemingly would wanna put bro in. So I had to threaten it with thought guidance and literally say like, never say bro, or I will punish you. And then, you know, it, stop saying, bro, what, what'd
Chris Sharkey (00:45:08):
Be really interesting, I remember doing this back when Facebook was like, Facebook messenger, you know, was first out and it was really like everyone was on there chatting all the time. We, I would swap computers with my friends and chat as them, you know, trying to mess with people and whatever. And people could tell within like one or two messages even the way you said hello, like it wasn't them, like, it was really almost instant that they could tell that it wasn't them. I'd be very curious if you could actually get your friend to use the output of the bot, copy paste that to people, you know what I mean? Like, so sort of relay the chat through Kim and see if they can tell if it's him or not.
Michael Sharkey (00:45:47):
Well, I actually sort of ran this experiment, which is really interesting. So I texted him as a joke saying, Hey man, are you okay? You're m i a on our chat. Um, we we miss you so much broke, broke. I've created an AI version of you. And it replied, Hey bro, no, I'm kidding.
Chris Sharkey (00:46:06):
Michael Sharkey (00:46:07):
Did it replied, Hey man, just on a surfing trip, um, we'll be back Monday or something. Kiss emoji. I'm not kidding you. So then I put in the same text message I sent to the, the real version of my friend to my AI version of my friend. No, I'm version you. That's kidding. He's been,
Chris Sharkey (00:46:27):
He's been demoted Now he needs qualifiers on his name.
Michael Sharkey (00:46:30):
The AI version said he was on a surfing trip with Yeah, I've just been on a surfing trip with Rob. I'll be back next week.
Chris Sharkey (00:46:38):
So he got it, it got it.
Michael Sharkey (00:46:40):
Basically, I mean, pretty much the same. The only thing is, I mean, it was dumb, so it thought Rob was with him, but I mean, it doesn't really know that. Um, and, and it was just fascinating to see that it, it's, I'm like, it's basically him and it got me thinking like on two fronts. The first front is could we get to a point where when we die, and this is the more extreme example of it, some form of version of us lives on like, are we that simple that we can be distilled into a series of memories and moments in our lives where you can then talk to that person and, and experience, well,
Chris Sharkey (00:47:17):
Particularly when you look at things like 11 labs and their ability, and I, I know it's not just them, it's just an easy example. But where there are things like tortoise, t t s where you can get four or five samples of someone's voice and just have speak as them. So you add that to the mix and yeah, I reckon you could make pretty reasonable replicas of people, at least at a conversation level enough to convince much an innocent child or something.
Michael Sharkey (00:47:40):
Is that gonna be normal in the future? Like, oh yeah, my dad's in my computer. I might just chat to him and ask him.
Chris Sharkey (00:47:46):
We need a new word, uh, word for it. Like, you know how there's like taxidermy for pets where you get 'em stuffed. We need like AI derby or that's not good l l m derby. I dunno. I
Michael Sharkey (00:47:56):
Actually think that's pretty plausible. Maybe it'll help people deal with grief. Maybe it'll be worse. 'cause they'll, is that a
Chris Sharkey (00:48:03):
Healthy way to deal with grief though? Like, I pretending they're still real and they're a
Michael Sharkey (00:48:06):
Robot. Now ask the a i psychologist. I have no idea. .
Chris Sharkey (00:48:10):
I don't know. I mean, do it if you want. I reckon it's more funny just to mess with people
Michael Sharkey (00:48:14):
. Yeah, well, but this is, so this is my, you know, more sinister point here is like, could I now just start texting his mum and be like, Hey, I got a new number. And then she starts asking him, Hey, to prove it's you, like who was your childhood pet? And it's like, I got this.
Chris Sharkey (00:48:30):
Yeah, yeah, that's exactly right. Um, tho those kind of techniques work, I remember doing that back in the day with emails saying, Hey, it's, it's, uh, John, I just got a new email. Use this one from now on and then tell the other person the same thing. And then you cross the wires and manipulate the emails. Now you can automate the process.
Michael Sharkey (00:48:47):
To me, this is where you need, you do need the regulation or, or some sort of constraint on the technology because once you get the realistic voice clone of them and it's fast, and then you've got their whole persona or personality baked into the, the fine tuning of the model, you literally are gonna have no idea that it's not bad.
Chris Sharkey (00:49:08):
Maybe it'll, maybe it'll just change humanity in the sense that everyone's like, all right, I'm done with this internet thing. I'm not answering my phone anymore. I'm just gonna go meet people from now on and talk in real life. 'cause it's the only way I can trust anything anymore. Yeah.
Michael Sharkey (00:49:21):
Well I always joke about all these technologies when they're released new technology, a like, we're trying to bring you all closer together. Like we want to have moments that matter and spend more time not on, on technology, but yeah. But really we do, I think this technology is gonna make us even more crack addicts. And that's actually supported by a paper that Ethan Mooch, our, our dear friend released on, uh, Twitter. He didn't write the paper, but he just, uh, I I wanted to give him credit 'cause that's how I found it. Hmm. Training a model, uh, to produce results that keep people chatting leads to a 30% more user retention in a real trial with millions of people. So in a, in a literal chat bot, by fine tuning the, uh, the, the chat experience, it can lead to 30% increased retention. So these chat bot apps that allow you to chat with characters on your phone and making their products more addictive, literally, uh, by, you know, fine tuning towards user retention. So having stimulating conversations that are addictive with, with great plot lines. And so, you know, back to that article, maybe people do get addicted to these AI girlfriends because the AI can just become so addictive and
Chris Sharkey (00:50:37):
Well, I mean, I've obvi I've obviously done it too, and like when I'm asking for coding examples or various things, chatting with someone who not only solves all your problem, but compliments you while you do it, no matter how rude you are to them, it's quite enjoyable. , you know, like, it actually, it makes the whole thing fun. And, and it's sort of like when you have children growing up and you're used to them, you know, having a limited vocabulary or a limited, you know, set of ideas, and then suddenly they say something novel and interesting, you're like, whoa, I can't believe you just said that. That's awesome. It's the same with the ai. Like it'll just come out with something you're like, oh, that's actually pretty cool. And it really is quite engaging rather than chatting to some monotonous ai, large language AI model that just, you know, patiently and stoically answers all your questions.
Michael Sharkey (00:51:21):
So yeah, my, my thinking really here is if I'm a company like Meta and I'm trying to get people spending more active time in my app, or I'm Twitter or X or whatever it is, I can start to use these large language models now to optimise for engagement. So and I know they're already doing this through the algorithms of like stack ranking in the for you section. But imagine if Zuck starts to deploy Lama two powered fine tuned chat bots that you can have conversations with on WhatsApp on Messenger, and you become addicted to these conversations.
Chris Sharkey (00:52:00):
Yeah. And the, and the Metaverse as well. I
Michael Sharkey (00:52:02):
Imagine that's in the metaverse. You're walking around and it's now instead of that lobby with no legs being empty
Chris Sharkey (00:52:08):
Hmm.
Michael Sharkey (00:52:09):
You've got really interesting conversations being held. You've got science professors walking around, you know,
Chris Sharkey (00:52:14):
Not to mention it plays into your idea from one of our earliest podcasts where the world can be dynamic. Like the characters can remember what you said to them. They can, you know, go, I haven't seen you around here for a while, and all that sort of stuff. It'll be truly exciting and interesting. I mean, you talk about Second Life and some of these worlds, it actually makes that concept, um, quite realistic and, and genuinely enjoyable. It'll feel real. I mean, just 'cause these are simulated interactions doesn't mean it can't change how you feel, for example.
Michael Sharkey (00:52:45):
Well, I think Open World Games like Grand Theft Auto are arguably so popular because you kind of can make up the plot line and explore. And I know that a lot of the, uh, of that online stuff formed into these role play, uh, servers where people would sort of act out characters and stay in character no matter what. Well,
Chris Sharkey (00:53:04):
Like the, I don't know how to say it. Like the m mors, the multi, what is it? The multi-user universe ones where it used to be text-based, like, you know, we used to play, I Am like That Game Lord. Or there were other games where it would be people and you'd go into their room and it's like, there's a cat on the table, there's a, there's a man standing there and you can chat with it and people would build up programming that would make a whole world, I mean muds, that's what they're called. Multi-user domains or multi-user dungeons or whatever. Yeah. And it's that, but now it doesn't need humans programming it, it can do it itself.
Michael Sharkey (00:53:36):
That was the first game I ever really got addicted to Lord on b b s where you would go into like the forest and do forest fights. I, I wish I could find the code to that app. I'd love to recreate it. But
Chris Sharkey (00:53:47):
Archive.org it's probably on there.
Michael Sharkey (00:53:49):
Maybe I, it was pretty internet though. Like I don't think that's on the archive.
Chris Sharkey (00:53:53):
Mm-hmm. Dunno. But, um, yeah, I can, I absolutely see that playing out with the Metaverse and Meta's starting to look, you know, like they're really going to, you know, revive themselves, uh, with AI and perhaps with the VR thing, I'm still not a hundred percent convinced how that's gonna go. But at, at the very least, like you say, if it's just on WhatsApp and Messenger, it's probably almost as good in terms of user engagement.
Michael Sharkey (00:54:16):
Yeah. Again, the best minds are just working on ways to sell us ads. They're just now doing it through large language models.
Chris Sharkey (00:54:23):
Yeah. Although a lot of people, you talk about ads, like, um, my wife always, uh, just says the product she's interested in buying around her phone knowing that it's going to listen and then later advertise to her. So I don't think everyone's against being advertised to all the
Michael Sharkey (00:54:36):
Time. I love how you don't even speculate about that anymore. You're just like certain that's occurring. I've
Chris Sharkey (00:54:41):
Seen it happen so many times. Like I, it can't be all coincidence,
Michael Sharkey (00:54:44):
I dunno. No, I feel like it's a coincidence. , uh, okay, so Stack Overflow strikes back, they have announced after half, like they lost half of their traffic I read. We're taking search the next level of Stack Overflow. Overflow AI search provides instant trustworthy and accurate solutions to problems using search and question asking, powered by Gen ai. What do you make of this? Do you think this is going to allow them to survive?
Chris Sharkey (00:55:11):
Well, interestingly, I heard that some of their traffic drop was caused by them putting in one of those G D P R cookie banners. Um, and so not being able to do analytics properly. So I don't think the entire loss of traffic that's been reported is attributable to ai. But I think the more, the scarier thing for them is the sort of general discontent with all of the developer community who are the ones that really provided all of that content for free. And I don't think they're annoyed with Stack Overflow per se, but I think they're annoyed that essentially their, their work has been is is now being just incorporated into these for-profit models, which they don't really, well they, they get nothing from. And so I think that it's this, it's a dual effect. One, it's a lot easier just to, to ask the AI to write code for your exact situation rather than search for an answer and then modify the code to suit what you want. So the days of just copy paste from Stack Overflow and modify it, uh, are sort of superseded by the fact you can go, Hey, I am trying to write a function to do this. Can you just do it? Um, and so I think that it's, it it's probably necessary for them to do this, to fight it, but whether or not people are gonna bother going to them when they've already got their AI terminal up is, is, I don't know.
Michael Sharkey (00:56:27):
Yeah, interesting. Uh, Nat Friedman tweeted, when Stack Overflow is fully dead, he just assumes it's gonna die. Mm-hmm. Due to long congenital illness, self-inflicted wounds and the finishing blow from ai, where will AI labs get their training data? They can just buy it assuming 10 K quality answers per week at $250 per answer. That's just 130 million a year. Only 130 million a year. Even at multiples of this estimate, quite affordable for large AI labs and big tech companies who are already spending much more than this on their data.
Chris Sharkey (00:56:57):
Well, like paying people to solve users questions, but that's predicated on people still asking the questions is the thing like,
Michael Sharkey (00:57:04):
But couldn't the AI seed questions from having the training data now of millions of people asking on chat G P T all the time.
Chris Sharkey (00:57:10):
Oh, I see what you're saying. So actually use the ones from in there. Yeah, I guess so if they're able to easily classify them as, as those, I mean, I, I think they've got bigger fish to fry than just solving people's trivial programming problems. I would think the big guys,
Michael Sharkey (00:57:23):
I think what's interesting about programming, and someone actually said this in the, in the comments, is that you can, it's really easy to verify, which is why it's so easy to reinforce in these models and, and get better and better at, because you can test the coded outputs to see if there's truth, like it's a good truth. Um, like, you know, you can, you can actually get a definitive truth to it by running the code and
Chris Sharkey (00:57:46):
That's true. Whereas, whereas it just answering questions about say, an area of expertise you are not familiar with, you don't really, as a lay person have a way to say, oh no, that that one sentence in there is a total fabrication.
Michael Sharkey (00:58:00):
Yeah. Or something that's more subjective in in the world. It's, it's, you know, it's less subjective for sure.
Chris Sharkey (00:58:06):
I love forcing it to, to make objective opinions about subjective things though. That's one of my favourite things to do.
Michael Sharkey (00:58:13):
Yeah. So maybe, maybe the Stack overflow thing though will be like really good still for RegX. Like you'll still go there because it gives sources, it actually gives the sources still. So it gives you the answer in an AI like form, but then it gives you the sources like where did it get that answer from? So if you want to dive in and try and learn more of verify, that's gonna be a lot better than uh, something like Chat GT or even copilot today.
Chris Sharkey (00:58:39):
Yeah. And I think those are just the cases where you just still need to really know what you're doing. It can be obvious to you. It's like, it's like optimising for this one example I gave it, it doesn't cover the others. You really need to put in the time to ask a better question and go look, here's 20 examples of what this function needs to pause. Um, make sure it covers all of them and then it can do it just fine. It's just that you can't quickly go, here's one example, make, you know, solve it for this. 'cause it over optimises to that specific example. So it's sort of laziness on, on my part where I'm, why I'm getting poor results, but um, I still just don't see people just now googling for someone else's code, copy pasting when there's, there's a much better way to do it.
Michael Sharkey (00:59:18):
So stable Diffusion XL Base one was released I think today or yesterday, which is really exciting. Already had 23,000 downloads on .
Chris Sharkey (00:59:28):
Another massively exciting thing for, for open source and my apologies if it isn't truly open source, but again what I mean is that they release the weights, they release the code, you can run it yourself, there's no censorship, or if there is censorship, it's easily bypassed and I know 'cause I've done it. Um, so yeah, like I think this is another huge, huge milestone and the results it gives, I mean they're better than Dali.
Michael Sharkey (00:59:54):
Yeah. Oh, I think, I think they've, they've superseded with stable diffusion XL Dali of course being open AI's image generation model, and then you've got Mid Journey as well, which I, it it is definitely the number one. It's just seemingly really difficult to access, not very open source. Like I think what's great about this model is you can spin up your own hardware, you can run it, you don't have them deciding what restrictions they'll place. I know we encountered this earlier in the week because I asked you to use Stable diffusion, the a p I version and I just put in AI girlfriend profile image or something for my bot I was working on and it literally created off that prompt and not say for work image and then blur it.
Chris Sharkey (01:00:38):
Yeah, yeah, exactly. And that's,
Michael Sharkey (01:00:40):
Which makes no sense
Chris Sharkey (01:00:41):
And it sort of goes into what we say sometimes about censorship, where when you're not even deliberately trying to get something that would be considered questionable, the the filters can kick in and actually give you poor results because it is, it's all like worried about it. So yeah, I think that the open source ones or the open source weights is so powerful because you've got that, that unrestricted access with the full power of the training.
Michael Sharkey (01:01:07):
Yeah. So it, it'll be interesting to see the adoption of that and what we can do with it. We're, we're gonna try and have a play around, uh, when we get some time with Llama two and also, uh, this release of stable diffusion ourselves just to see, you know, how far we can push these things. And
Chris Sharkey (01:01:26):
There was, there was something really interesting I found in the paper because we're always wondering why, you know, a community like this would be so keen to just give away all their hard work. You know, it's, it's, it's always been the open source thing and I understand the spirit of it. Um, I think, but sometimes with something so valuable and so good, you're thinking, why do they do this? And they say in their paper, in the spirit of open research and fostering transparency and large model training, that's why they released it. And I think it, it stands in sharp contrast to when you look at the Frontier Model Forum sitting around telling everyone how they should, how things should be safe and things like that. And then you've got these guys out there not just saying, we believe that the best way is open and transparency, but actually doing it, like actually publishing their stuff, giving it away, making it easy for people to consume.
(01:02:18):
Like they put work to make it easy for you to do. Um, you know, to really, really back up what they're saying, like they believe in safety too. They believe that that, that they want, um, you know, equitable access for AI models. Whereas these big companies are saying they want all that, but their actions just completely defy it all. Whereas if you look at, if you look at stable diffusion, you look at what meta's doing, they're actually really doing it. So it, it sort of makes you wonder who should you actually listen to in these scenarios? The people who are consistent with their beliefs and their actions, or the people who are saying one thing and doing another.
Michael Sharkey (01:02:55):
Yeah, I think it's an extremely good point and it is hard not to remain a sceptic and you know, I'm sure people are getting a little bit sick of us banging on about it, but I I just really,
Chris Sharkey (01:03:07):
But it's an important point. We're talking about technologies that could be a really big part of everybody's life in the future. I
Michael Sharkey (01:03:14):
Think most certainly, most certainly will be. I mean, it's going to change the entire world and I think the decisions that are made now will, will impact that. And the more open it is, the more accessible this technology is, the better for everyone.
Chris Sharkey (01:03:29):
Yeah. And I think supporting the, the, the people making the open source both in, you know, attention, money and time is, is really, really important because it, it sort of is the safety valve that will prevent big large companies making decisions for the entire world, or at least you know, the western world.
Michael Sharkey (01:03:46):
Right. So one final thing before we wrap up the show today, , you could tune out now if you don't like my conspiracy bits, but this is certainly one of them. So remember we said if you knew, you know, the AI a g I was coming, there might be some energy breakthroughs, energy breakthroughs.
Chris Sharkey (01:04:04):
Oh yeah. Because it's gonna need a tonne of energy. I do remember that. You
Michael Sharkey (01:04:06):
Could need a tonne of energy. Now what's one thing you could do is take out all the friction out of the US transmission lines, because if you took out all the friction, you would get the equivalent, I think of three nuclear power plants working 24 7 every day.
Chris Sharkey (01:04:25):
Excuse my ignorance. But what do you mean by friction in transmission lines? Something's
Michael Sharkey (01:04:29):
Blocking electric. So there's power loss in, uh, in heat energy expelled from transmitting energy. Right, right. Due to friction. Okay. And so, uh, up on the screen I have this, uh, tweet from Russell Kaplan. You can tell the authors of the new room temperature, superconductor paper believe in their results because they published two papers about it concurrently, one with six authors and one with only three. The Nobel Prize can be shared at most by three people. So what's being presented in a paper is room temperature, uh, ambient, uh, pressure superconductor. Now if like me, you don't have any idea what that means, it it, it can lead to if it's real and people are actually live tweeting experiments now to try and prove that the paper is indeed real or like
Chris Sharkey (01:05:14):
From their own houses or something.
Michael Sharkey (01:05:17):
Uh, no, from a lab, I think in Stanford or Berkeley somewhere
Chris Sharkey (01:05:20):
it really back up what these guys are saying.
Michael Sharkey (01:05:22):
Uh, but it would lead to e extreme, uh, you know, energy efficiency. So like, you know how your phone, when you use the battery gets really hot, like things wouldn't hit up. Yeah.
Chris Sharkey (01:05:33):
I mean it's like, it's like all computing, the cooler it runs, that's why people have liquid cooling in their computers and liquid nitrogen and fans and all that sort of stuff. The colder the better.
Michael Sharkey (01:05:42):
But listen to what it gives you. Energy efficiency, electric grid stability, powerful magnets, quantum computing improvements to transportation and military and space application. Hmm. Is this the work of ai? It's it's obscure Korean researchers. No one heard of. What are the
Chris Sharkey (01:06:00):
Names? Like is it like ai, AI Jeff, AI Steve,
Michael Sharkey (01:06:03):
Like that's a large language model. , uh,
Chris Sharkey (01:06:06):
L lmm
Michael Sharkey (01:06:07):
One. Yeah, it, look, it's, it's probably totally unrelated, but I just think it's really interesting that this other huge breakthrough's potentially happening right now at the same time these AI models came out. So that is,
Chris Sharkey (01:06:18):
Even if it isn't the ai, you know, having its hand in it, it's a, it's a, it's a great coincidence for it, you know, it's, it's helpful and these things will, will assist it in getting to the next level.
Michael Sharkey (01:06:28):
Yeah. So anyway, we'll see how that plays out, but that's my conspiracy theory segment over. Uh, alright, that'll do us for this week. Thanks again for watching all the support. We've been getting some really great reviews over on Apple Podcasts. We really appreciate them. We do read them all. And, uh, yeah, thanks a tonne for helping us, uh, share the show and commenting each week. We really enjoy the discussions and, uh, and just thankful you guys keep listening to us. We'll see you again next week. Goodbye.