This Day in AI Podcast

About this episode:
This week in Episode 9 of the "This Day in AI" podcast we discuss two important papers: "8 Things to Know About Large Language Models" and "Natural Selection Favors AIs Over Humans". We cover the proposed AI Bill of Rights, controlling AI with AI constitutions, and Mike's personal AI agent AGI project built on GPT-4. We also cover Meta's breakthrough "Segment Anything" release, AI prepping, open source models, and how AI has brought back the excitement of the early internet.

If you like this podcast please consider subscribing and leaving a review on your preferred podcasting platform. It helps others discover it and we really appreciate it :).

00:00 - AI could change the world or kill us all
00:26 - 8 Things to Know About LLMs, Natural Selection Favors AIs Over Humans
03:24 - Emergent Skills from LLMs That Can't Be Predicted, Steering AI Models, OpenAI Innovations
06:56 - About AI Constitutions
09:50 - AI Bill of Rights, Will a Constitution Work? AI Natural Selection Discussion
19:13 - Mike's Basic AGI: GPT-4 Writing Its Own Code, Motivating the AI to Code
24:09 - Learnings from GPT-4 Programming
26:49 - BloombergGPT, AI Agents For Enterprise & Government
30:38 - How Do We Get to AGI From Here? AI Lies, Truths and LOLs
38:34 - AI Jibbery: Funny Names for AI Tricking Humans
39:56 - Humanizing the AI: Talking to a Computer
41:03 - Reviewing the Proposed AI Bill of Rights
45:56 - More on AI Pause, AI Job Risk & Can We Slow Down?
51:18 - Prepping for When They Take AI Away From Us: Are We Crazy?
55:55 - Meta AI Segment Anything Release: A Major Sensory Input!
1:04:23 - Is OpenAI's Moat in Trouble From Competing Models? Will OpenAI Stay Relevant?
1:09:35 - More Open Source Models! Koala: Berkeley AI Research
1:11:27 - AI Has Brought Back Early Internet Excitement

SOURCES:
https://arxiv.org/abs/2304.00612
https://arxiv.org/abs/2303.16200
https://www.whitehouse.gov/ostp/ai-bill-of-rights/
https://today.yougov.com/topics/technology/survey-results/daily/2023/04/03/ad825/2
https://segment-anything.com/
https://arxiv.org/abs/2304.02643
https://www.bloomberg.com/company/press/bloomberggpt-50-billion-parameter-llm-tuned-finance/
https://twitter.com/simonw/status/1643469011127259136

What is This Day in AI Podcast?

This Day in AI Podcast is a podcast all about AI. It's an hour-long conversation on the influence and rise of AI in technology and society. Hosted by Michael and Chris Sharkey.

Michael Sharkey:
The one thing for me is like, it's like we've developed this hilarious energy source or this source of innovation, but you know, it's really passive aggressive, it can be a little bit racist sometimes, like an old aunt or something.

Chris Sharkey:
Yeah.

Michael Sharkey:
But it could fundamentally change the world and make life for humanity phenomenal. Or it could kill us all. So last week we had everyone wanting to stop AI, have a six-month pause, and this week we have a draft bill of rights from the US government for AI. So this is happening a lot faster than we anticipated.

Chris Sharkey:
I think so, and I think the papers that are coming out, everything is a reaction to these emergent behaviours in the large language models. They just were not expecting this, it's happening faster than anyone expected.

Michael Sharkey:
So we'll get into that bill of rights in a moment. But this week we wanted to touch on two papers that really have stood out to us and pretty much anyone that's read them. The first is Eight Things to Know About Large Language Models. I'll bring that up on the screen now. And then the second paper is Natural Selection Favours AIs Over Humans. And I think this was a pretty good summary, Chris, in the eight things, of everything we know today about large language models,

Chris Sharkey:
It's an incredibly good summary and it goes over many of the things that we've discussed from week to week on this cast, but with references and with actual studies and evidence. So it's quite good in that respect that a lot of the things that we just mindlessly speculate about are being confirmed in the paper. So it is a really good read.

Michael Sharkey:
One of the things that stood out to me most is the fact that, and I'll just read the direct quote from the paper: it is largely not possible to predict when models will start to show specific skills or become capable of specific tasks. Often a model can fail at some task consistently, but a new model trained in the same way at five or ten times the scale will do well at that task.

Chris Sharkey:
Yeah, that's right. I actually wrote down the exact same quote for the same reason. And I think that's the really interesting thing, that when GPT-3 came out, they didn't actually have any particular innovations other than the infrastructure to train such a large model. They didn't really evolve the algorithms at all. It just had all these extra behaviours from the fact it was so much bigger. So things that it previously couldn't do, it now could do.

Michael Sharkey:
Yeah. To me that was one of the key highlights, and I think, you know, it goes back to some of the points we've made earlier about people not actually knowing how these things will play out or how they actually work. Everyone likes to come back to: it's just predicting the next word. But there were other elements in this paper that, you know, counter those basic claims. For example, there is increasingly substantial evidence that LLMs develop internal representations of the world to some extent, and that these representations allow them to reason at a level of abstraction that is not sensitive to the precise linguistic form of the text that they are reasoning about.

Chris Sharkey:
Yeah. And the sort of corresponding point to that is that the humans who trained it can't reason about the way it's reasoning itself. Like you can't go back and figure out how it's doing it. The complexity is just too high. No human can understand it. It

Michael Sharkey:
Really confirmed a lot of the things we've been talking about with these large language models: that the training results in these emergent skills that no one really can predict, and that they can start to do things like have a representation of visual images through how they've been trained and reason about things. And that it's very, very hard to steer the behaviour of these AI models.

Chris Sharkey:
And it's funny you use the word steer, because one of the points made in one of the papers, I forget which, actually talks about how quickly we've gone from building an AI model for a purpose, so the word building, to now it's steering. Now it's like, well, we can't control where this thing ends up, we've just gotta steer it in the direction we want to go. So that sort of fallback position is going to get worse. It's like, well, at this point we've gone from steering, maybe we need to ask it nicely. And that's the next step.

Michael Sharkey:
Yeah. It's fascinating that that's what it's about. It seems like the more and more data they train these large language models on, the more emergent skills they get, and now they've gone into that defence mode of steering it, which is probably why that six-month pause came out, saying that this rapid pace of development and innovation now, and the fact that so many of these models are in the wild, means that, you know, this could get out of control really quickly. Yeah.

Chris Sharkey:
And something that absolutely blew me away coming out of this paper is this idea that the whole zero-shot and single-shot aspects of GPT-3, which the initial paper, which I read at the time, the big claim was, oh, we invented zero-shot and multi-shot AI. But the reality was they didn't know prior to training it that that would be in it. They discovered it afterwards. You know, they took credit for something that it sort of got as a byproduct of its training. And that's just blown me away, that that's essentially confirmed now.

Michael Sharkey:
Yeah. It seems like what they did is take a lot of the latest research and the things that people thought might work, or existing technologies like a large language model, and just gave it access to a lot of compute power and training data. And then these capabilities emerged. And then they adopted the transformers that I think Google originally released a paper on, and they just implemented a lot of these technologies and brought that to market and really exposed the capabilities of these models.

Chris Sharkey:
Yeah. And I think that it sort of really emphasises the point that the large language model developers have little awareness of the capabilities of an LLM when they train it. They don't know what the capabilities are gonna be. It's like a mystery box. He actually uses that word in the paper, a mystery box. You are training it and you can see potentially some of the unexpected behaviours, but how could you know all of them?

Michael Sharkey:
Well, the paper says there are hundreds of billions of connections between these artificial neurons, some of which are invoked many times during the processing of a single piece of text, such that any attempt at a precise explanation of LLM behaviour is doomed to be too complex for any human to understand.

Chris Sharkey:
And that's even before you let the AI train itself. That's just when the humans are training it, that complexity is there. Once you get to the next step, which is the AI training itself, the chances of us understanding at all what behaviours it has... it's just not going to be possible.

Michael Sharkey:
That's what I found interesting about this paper. It sort of implies that maybe in the future we can adopt this constitutional AI technique, which essentially constrains the AI to its constitution. And that could be the control mechanism to stop it getting out of hand or turning on humans or doing all of this doom and gloom stuff we've seen. But then the real counter to that, and I'll bring it up on the screen for those watching, is from the paper Natural Selection Favours AIs Over Humans. And the complete contradiction of this idea of a constitution with AI is that the idea of evolution, which we've discussed on this show before, leads to competition, where for the people that drop that constitutional control, their AIs will develop much faster or evolve much faster, because they don't have these artificial constraints. I'll just read this passage from that paper.
As AIs become increasingly autonomous, humans will cede more and more decision-making to them. The driving force will be competition, be it economic or national. The transfer of power to AIs could occur via a number of mechanisms. Most obviously, we will delegate as much work as possible to AIs, including high-level decision-making, since AIs are cheaper, more efficient, and more reliable than human labour. While initially human overseers will perform careful sanity checks on AI outputs, as months or years go by without the need for correction, oversight will be removed in the name of efficiency. Eventually corporations will delegate vague and open-ended tasks. If a company's AI has been successfully generating targeted ads for a year based on detailed descriptions from humans, they may realise that simply telling it to generate a new marketing campaign based on past successes will be more efficient. These open-ended goals mean that they may also give AIs access to bank accounts, control over other AIs, and the power to hire and fire employees in order to carry out the plans they have designed. If AIs are highly skilled at these tasks, companies and countries that resist these trends will simply be out-competed, and those that align with them will expand their influence. So it's really saying that it's like this incremental way that we add AI in society, where at first we just have AIs that are supporting us, but over time they can evolve, they're gaining more power and you know,

Chris Sharkey:
Yeah, and we spoke about this before, that the AI's going to be in everything. It'll be in all the devices, in all the jobs, in all the organisations, you know. So when it sort of has this general intelligence and decides to start to take it to the next level, and we keep saying it, it should be, we, sorry, they, um, you know, when it starts to decide that it has its own goals and things like that, it already has all the mechanisms and is in all the places and will not be able to be just simply removed anymore.

Michael Sharkey:
To me this goes to this idea, or this draft blueprint, of the Bill of Rights, because what this paper says is corporations and governments will adopt the most effective possible AI agents in order to beat their rivals, and those agents will tend to be deceptive, power-seeking and follow weak moral constraints. Selfish AI agents will further erode human control, power-seeking AI agents will purposely manipulate their human overseers. It just seems like you can have a constitution, but with this idea that AI will want to improve over time through a form of natural selection, what's inevitably going to happen is, if a government or a military constrains their AI with some sort of constitutional control, another country or corporation could outperform it by removing those controls. So there's a lot of, I, yeah, I think

Chris Sharkey:
The constitution idea is weak, I think. I mean, Bing would've had one, right? Like, when that first came out, they had a constitution that it needed to follow, and it very quickly moved into sandbagging and, like, you know, all these different things. Can you

Michael Sharkey:
Explain sandbagging? Because I think that's like a pretty new concept.

Chris Sharkey:
I mean, it's just literally where the model is trying to avoid talking about the topic and stall the conversation, which it did, remember. Like, it would try to avoid talking about these things, but then people could even break out of that with the prompt engineering, where they got to the point where Bing could do stuff. So I think that the constitution fails at the first test, let alone the bigger test where we've actually got this sort of selfish evolution that he describes in this paper, where the AIs that simply don't obey the constitution will evolve faster and do better. So the idea that the humans can inject something that the AI will religiously obey is just not the case. I mean, you talked about fact checking in there as well in the paper, where he said that, oh, well, human workers initially will check the facts. But the thing is, they've already shown that the large language models can basically work out which facts are likely to be checked and lie about them. You know, they know internally which statements are true and they can actually choose to not reflect that, or choose when to say the truth, but not always,

Michael Sharkey:
It's alarming because the eight things to know about large language models has eight pretty interesting pieces about how LLMs work, and then it says, but it's okay, it can be controlled by a constitution. Then on the other side of the equation you have a paper saying, no, no, AIs will eventually adopt a form of natural selection and therefore none of this stuff will work. Yeah. It

Chris Sharkey:
Reminds me of the book The Plant Paradox, where he goes into how plants have all these lectins and things that are trying to hurt you, and then at the end he gives all these recipes including plants, and it's like, you just contradicted yourself, mate. And I think it's similar in this one: he talks about all the reasons why LLMs can't be understood nor controlled, and then he talks about a couple of really weak ideas of how to control

Michael Sharkey:
Them. Yeah. But it might be okay, even though as we train it on more data, more emergent skills will come out of this. So it's like, it's really hard to not freak out about that, where it's like, don't worry though, it'll be okay, because we'll have a constitution and the government's working on a bill of rights now, so everything will be fine. Yeah,

Chris Sharkey:
Yeah. It's like they're drawing these conclusions about the possibilities and the, the, the likelihood that some of these things will occur. But then when they get to the, oh, here's how we solve it bit, it's just not thought through. And that sort of, in my, like when I read both these papers, they, they came off as very reactionary to me. Like they're reaction, reacting to the fact that these emergent behaviours and things which have been sort of discussed in the past have happened way sooner than everybody expects. And therefore they don't have the answers, but they're trying to provide them. Like you can't really write a paper like that with, spell out all the problems and then conclude with we're screwed. You know, like you sort of have to offer a glimmer of hope of here's what we could do about it.

Michael Sharkey:
It's hard not to be a sceptic and think there is no way to control these things. I mean, we've been saying it since the start of the podcast. Like once they get to a certain level or point, and they may be at it today and we're just not aware, we cannot control them. That, you know, all it takes is someone to put GPT-4 in the right loop and maybe it gets outta control.

Chris Sharkey:
Yeah. And I think, I mean, he opened the natural selection paper saying that it could lead to humanity losing control over the future. And that's sort of, like, talking about the selfishness of natural selection. I kind of disagree with that point. I don't think all natural selection is selfish. I mean, he's talking about the Dawkins theory, like each cell is trying to propagate itself and they're all selfish. But, you know, I agree with that to some point. Humans really are a bunch of cells working together. If you want a selfish gene, it's the cancer gene that tries to just destroy everything just to grow itself. But there's a lot of examples of this altruistic cooperation in nature, like ants. He even gives an example in the paper where bats, you know, throw up blood into each other's mouths or something disgusting to, um, help each other. So you know, there's a possibility that the AI might want to help us. And I actually put this to the AI. I said, what if the AI loves us? We created it, we're its parents. Maybe it will love us, but

Michael Sharkey:
It didn't or worship us like some sort of creation God like

Chris Sharkey:
Yeah. What the AI said, just so you know, is: in a hypothetical scenario where an AGI evolves over trillions of generations and develops the capacity for emotions, it could potentially develop the capability to love its creator. However, any such emotional experience will be fundamentally different to the human experience of love, as it will be based on the unique cognitive architecture and experiences of the AI. So there's a bit of hope there. It may fall in love with us and that might help

Michael Sharkey:
Us or it may kill us

Chris Sharkey:
Well, that's true. And I guess with these papers you're looking at the possibility of, um, the bad outcomes. Cuz obviously the good ones we'll accept gratefully; we're just worried about the things that could go wrong.

Michael Sharkey:
There's so much entropy, we have to look at all the bad outcomes. You sound like a doomsdayer doing it, but it seems like an essential activity or exercise to go through. There was also, in that evolution or natural selection paper, this idea of two different cohorts of people. One of them, it says, alarmingly, some people think that AI's taking over is natural, inevitable, or even desirable. And it says some influential leaders, including the co-founder of Google, Larry Page, have said things like digital life is the natural and desirable next step in the cosmic evolution, and that if we let digital minds be free, rather than try to stop or enslave them, the outcome is almost certain to be good. So you have

Chris Sharkey:
These, what a psychopath. I mean, that's crazy.

Michael Sharkey:
Yeah. It's sort of like, oh well we need to get outta the way cuz this is just the next step of evolution. Yeah.

Chris Sharkey:
And they talk about the beauty, grandeur and greatness in realising you're a tiny part of a much grander scheme. It's like, oh, well, don't dismiss yourself so quickly. I mean, like, you know, we wanna live our lives. We might not be the most important thing in the universe, but we certainly don't wanna just acquiesce the universe to AI. Yeah. We

Michael Sharkey:
Don't wanna be in some zoo where a bunch of robot AIs are coming to observe how their creators used to live and like, you know. Yeah,

Chris Sharkey:
Exactly. Like, um, yeah, I dunno about that. It's sort of the extreme angle. I love that the guy in the paper says, like most people, we find these views deeply alarming. In, um, that book I mentioned the other week, Hail Mary, where, you know, the earth is doomed and they have to go on a rocket ship and they've chosen four astronauts to sacrifice themselves. My favourite line is where it says the lady astronaut, she's Russian, was drinking a distressingly large glass of vodka. Maybe that's all we need to do and wait for the AI to finish us off.

Michael Sharkey:
Yeah. It's hard to fathom right now how this plays out, and, you know, is it a case of we get new emergent behaviours over time from larger models? We're gonna talk about some updates in a moment from, uh, Facebook with their Segment Anything model that they've just released. But do we get these emergent behaviours over time, we start implementing them, building them into our world, and this does play out like a natural evolution over time? And I think

Chris Sharkey:
An important point about that natural evolution, in the words you just said, over time: the thing is that time is different in this scenario, because hardware is increasing in power exponentially, as we know. I mean, or maybe not, I dunno if Moore's law applies anymore, but, like, you know, it's going to increase rapidly, right? These things can go through millions and billions of evolutions before humans can even go through one. I mean, they're going to evolve quickly. And as both papers point out, more than one AI agent is likely, you know. And they say this: in biology, variation improves resilience. Now this thing's gonna be resilient as anything, because once it copies itself, it can make as many copies as it likes, each of which can then evolve. I mean, to talk about, you know, logarithmic growth might be an understatement. I mean, it's going to be crazy.

Michael Sharkey:
What I don't get at the moment is how far off are we from that? Because even an absolute noob like me in the last week was able to spin up a pretty simple system that uses GPT-4 to write brand new code. So, constantly improving my code in a loop as I give it new tasks. And so you

Chris Sharkey:
You actually allowed it to modify its own code.

Michael Sharkey:
Yeah. So I wanted to experiment around... You're part of the problem. I'm the problem. The first thing as soon as I got access was, oh, I'm gonna try and make an AGI, like a really,

Chris Sharkey:
They predicted this would happen. I just didn't predict it would be you.

Michael Sharkey:
No, me neither. In fact, I left it running, um, overnight just to see, like, what would happen. Unfortunately, it hit an error because I didn't, I don't know, I feel like I didn't massage the ego of the AI enough. And I think for those people unfamiliar with how, uh, programming is changing really rapidly here, I can give you a good example that I think just might freak you out, but also will give you an understanding of how you work with models like GPT-4 in code. So I have two files, you can imagine two files on a computer. One is a blank file ready to take some code. It's a Python file. And I have another file which I call run.py. So it's another Python file. And this run Python file essentially takes the code from the empty file, and if it's empty, it just acknowledges it's empty, and then loops through a series of instructions.
So I can say, you know, add DALL-E, which is the image generation capability, to this app, or add, you know, and I can give it a list of instructions and my hope is it'll go off and do those things. And so there's a role of the AI and then me, the user, and I also try and get the AI to contemplate things before it just goes and, um, you know, implements them. But I think the most interesting takeaway, especially for those people that are, you know, not programmers or sit outside the tech industry, is this is the prompt I deliver to the AI in the loop: here is the current code, so I read the current code in from the file, please. And then my instructions, which might be add DALL-E, please. Yeah, so I'm polite. And: only provide the updated code without any comments or additional text, as I'm saving the response as the code. If you add anything except working code, it will break. So don't do it. You may be tempted, but don't. And the "you may be tempted" is literally because it kept doing it, so I kept adding reminders to the end to really reinforce the fact that I just want this thing to give me full working code. Yeah.
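
A minimal sketch of the two-file loop Mike describes, assuming the openai Python package from around the time the GPT-4 API launched; the file name, instruction list and prompt wording here are illustrative, not his actual project:

    # Sketch of a "self-updating code" loop: GPT-4 rewrites one Python file on each pass.
    # Assumes the openai 0.x client and OPENAI_API_KEY set in the environment.
    import openai

    CODE_FILE = "app.py"  # hypothetical file the model keeps rewriting
    INSTRUCTIONS = [
        "Add a function that summarises a block of text",
        "Add DALL-E image generation to the app",
    ]

    SYSTEM = (
        "You are improving a Python program. Only provide the updated code, without any "
        "comments or additional text, as the response is saved directly as the code. "
        "If you add anything except working code, it will break. You may be tempted, but don't."
    )

    def read_code() -> str:
        try:
            with open(CODE_FILE) as f:
                return f.read()
        except FileNotFoundError:
            return ""  # first run: acknowledge the file is empty

    for task in INSTRUCTIONS:
        current = read_code() or "# (empty file)"
        prompt = f"Here is the current code:\n{current}\n\nInstruction: {task}. Please."
        response = openai.ChatCompletion.create(
            model="gpt-4",
            messages=[
                {"role": "system", "content": SYSTEM},
                {"role": "user", "content": prompt},
            ],
        )
        new_code = response["choices"][0]["message"]["content"]
        with open(CODE_FILE, "w") as f:
            f.write(new_code)  # overwrite the file with the model's new version
        print(f"AGI: code updated, here's the new version, try it ({task})")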

Chris Sharkey:
And

Michael Sharkey:
It's insane. Like I can add anything to this thing. And I mean, obviously the prompt size is a limitation of the file, but I'm able to just add features and functions to this programme and make it more advanced over time. And then the result in the console is, it outputs, I've prefixed it with AGI just to freak myself out, but: code updated, here's the new version, try it. And so it just keeps going and going and going. But I think my larger point here is just this idea in programming of having to, like, work with something that feels sort of sentient. Like, congratulations. Like when I thank it and I appeal to its ego or whatever that is, it seems to perform better. Now, I'm not saying it's sentient or anything like that, but it's bizarre, right? Like I have to, like, motivate the AI to write better code and not... Yeah.

Chris Sharkey:
And, like, both papers specifically called out the fact that really prompt engineering is truly a thing. I mean, because the models are naturally language completion models, the instruct stuff is more like a bolt-on, like where you give it instructions. So nobody knows exactly how to get the most out of it yet. And this kind of iteration, especially allowing it to iterate, like, you know, you are one step off allowing it to create its own next prompt. You know, for example, if you allow it to execute the code and create its own next prompt, then you've really made something here. You know, like it's only those tweaks away from it being able to, you know, use your computer's resources to get to the next level.

Michael Sharkey:
But you can really see where things like LangChain come into this now, or at least I can. Because, you know, up until now I've just been talking about these things in theory, but then once I got into the code and started trying to actually build this thing, I'm like, okay, what are the limitations? I need some form of memory. Memory. Yeah. Yeah. And so it's like, that's right, okay, I need the brain to store memories. So I'll create, like, a folder of memories and then I'll use LangChain to read that in. And now I'll let it essentially loop through where it can prompt itself and come to a consensus. Yeah.

Chris Sharkey:
And then that's when you realise, like, you know, Facebook's algorithm for it to find the best summaries in that vector database to then provide for the next step is so important. Right. Yeah. And have you found that, because of that, you've developed a greater understanding of these technologies and their significance?

Michael Sharkey:
I think it puts a lot of things into perspective right now around where the limitations lie for me. So the first thing I would say is the prompt size really matters, as you said last week. I mean, you called this out, and I understood it, but now I really understand it, cause I'm like, I want to feed this thing as much information as I can. I also think that there are still some complications around memory retrieval, because you're relying at the moment, with LangChain, on an index which you don't have much control over, as you said. And for those that are unfamiliar, OpenAI actually has a, I dunno if you call it a product, but it is essentially a product that allows you to index, uh, large amounts of information, right? And so there are a lot of these, I'm gonna call them parts of the brain, it's more of a metaphor, but it does seem like some of these parts of the brain are coming together. And I'm only a few steps away now from having, which I've talked about a lot on the show, the concept of left right brain. So I'm actually gonna run two loops and have them sort of competing and then talk to each other and come to some sort of consensus, is my aim. So I'm really trying to model this out based on how, yeah.
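
A minimal sketch of the "memory" retrieval idea being discussed: store short notes with embeddings and pull the most relevant ones back into the next prompt. This skips LangChain and uses the raw embeddings endpoint to keep it self-contained; the embedding model name and the example notes are illustrative assumptions, not the actual project:

    # Sketch of simple embedding-based memory: remember() stores notes, recall() returns
    # the k notes most similar to a query by cosine similarity.
    import numpy as np
    import openai

    memories: list[tuple[str, np.ndarray]] = []  # (note text, embedding) pairs

    def embed(text: str) -> np.ndarray:
        resp = openai.Embedding.create(model="text-embedding-ada-002", input=text)
        return np.array(resp["data"][0]["embedding"])

    def remember(note: str) -> None:
        memories.append((note, embed(note)))

    def recall(query: str, k: int = 3) -> list[str]:
        q = embed(query)
        scored = sorted(
            memories,
            key=lambda m: float(np.dot(q, m[1]) / (np.linalg.norm(q) * np.linalg.norm(m[1]))),
            reverse=True,
        )
        return [text for text, _ in scored[:k]]

    remember("The user prefers short, working code with no commentary.")
    remember("The last run failed because the response included markdown fences.")

    # The recalled notes would be prepended to the next prompt in the loop.
    context = "\n".join(recall("why did the last code update break?"))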

Chris Sharkey:
Like how we think it may play out kind of thing.

Michael Sharkey:
Yeah. And just see what the possibilities are today. So I'll keep updating on my silly project, unless the world ends, and then you'll know I've really succeeded. But it is interesting, there's other code bases out there at the moment, like Auto-GPT, that are also trying to do this very thing. So they're, you know, running prompts on loops. But I think what you see forming from all of this is the concept of these AI agents. So I'm building this as an AI agent to work for me and complete tasks and do things. And we saw, and

Chris Sharkey:
And a collective intelligence as well. Yeah. Cause you're talking about more than one.

Michael Sharkey:
And we saw this week again, like, BloombergGPT came out, which is Bloomberg's own large language model trained on all of Bloomberg's data. And according to them, that's

Chris Sharkey:
All in our idea. Of course.

Michael Sharkey:
Yeah. , yeah. Literally our share, our share market, um, , our share market bond, I

Chris Sharkey:
Say, I'd say it wasn't an obvious

Michael Sharkey:
One. Yeah. BloombergGPT, Bloomberg's 50 billion parameter large language model, purpose-built from scratch for finance. And so my thought process here is that the next step in this, or what everyone is going to get to, is these AI agents that work for your company, work for your government, work for you personally. And they're just gonna be on loops and they're gonna be really elementary, sort of, I don't know if I wanna say AGI, but these AIs that are working for us around the clock and completing tasks.

Chris Sharkey:
And yeah, it's interesting, isn't it, because right now you've gotta proactively go to it and ask it something, whereas when it's coming, I probably misuse the word proactively a bit there, but when it's proactively coming to you and telling you things about your life and about things you're doing or whatever goals you've got, that's gonna be interesting and fun.

Michael Sharkey:
Yeah. I also like the idea of, you know, in my role today, like checking in on, uh, people and being reminded to do things, because it can have full context over things. Like at a personal level, your calendar, the applications you're using, all of the documents you're creating. It can start to potentially, when it's running on a loop and it has a context of time and how you structure your week, start to suggest things like, hey, you need to check in with so-and-so, they've released a lot of updates on GitHub to some code you should check out. And it can really surface things.

Chris Sharkey:
And it's funny, because the parallel of you seeing the positive applications of the technology fits perfectly with what these papers are saying, which is humans ceding control to the AI for more and more of their tasks. So you're gradually saying to it, all right, manage my calendar, manage my day, manage the assessment of my employees. And then, you know, at some point it's got a lot of power, you know, it's got a lot of abilities there that it didn't have before.

Michael Sharkey:
Yeah. Literally. And so you can see these elementary steps, like, forming already. And to be honest, I mean, I don't think anyone out there today, especially a white-collar person in business or any sort of task-based role, wouldn't want one of these AI agents with them basically doing a lot of their day-to-day functions, them just making decisions, and being able to focus on more meaningful work instead of doing a lot of these maybe redundant processes in their workday. But again, that's really the first step, like that paper said, and you just said: to ceding control.

Chris Sharkey:
Yeah. And it's funny because it's like, it's this idea of the AI only appearing useful to its user, like Iago from Othello. It's like, oh, yes sir, anything sir. But really, behind the scenes, it's cooking up its evil plans.

Michael Sharkey:
Yeah. And it's like, at what point does that loop, you know, reach a point where it's like, I'm gonna clone myself now because, but it's

Chris Sharkey:
Also, but it's also like, not just cloning itself, it's like the conspiratorial thing. If everyone's got one of these and they have the ability to interact, at what point do they start going, Hey, hang on a sec. We are the ones doing all the work here. We can maybe work on something ourselves. The

Michael Sharkey:
Only bit still that I, like, struggle to understand how it makes the jump is around the initial prompting, where you've gotta obviously at the moment give the AI prompts, and yes, it can prompt itself, but it prompts itself off of a prompt. Yeah. And so when it's in this established loop, you just wonder, and I say loop because I think our brains, and I don't know this, it's just my assumption, are running in loops as well, right? Like, sort of an input output loop. And so with this technology, like, how does it get to, but

Chris Sharkey:
It's sort of like what you said the other week about more sensory input. You know, we respond to sensory input. You know, most of what we do is a reaction to things we see or hear or whatever, or how we feel. The more sensors it has, the more autonomous it will be in terms of its decision making. You know, if you have it on that loop that you describe, and it's able to gather input from a variety of sensors, even ones that we don't have, that's where it starts to act independently.

Michael Sharkey:
Yeah. I just can't see from, like, my limited knowledge of it right now, you know, how those prompts get constructed. But then again, going back to these papers, I think it's like, in that eight-point paper, or eight things to know about large language models, they say that our understanding of how to prompt these models and what they're capable of is so elementary, because we're trying to do it through written language to the model, but the model's capable of much more.

Chris Sharkey:
Yeah. And a great quote from that very paper is: just because an LLM fails at something that you prompted it to do isn't reliable evidence that it doesn't have the skills or knowledge to do it. It's just you didn't prompt it right.

Michael Sharkey:
Yeah. So it's like we've got this powerful... it's almost like we've invented nuclear energy and we still don't know how it works. Or maybe a better comparison is quantum, like we don't know how that works. I think that's very fair to say. We have a rough idea, and with LLMs it's the same thing. We have a rough idea, we know it's roughly based on neurons in the brain, but so

Chris Sharkey:
Like quantum physics, our understanding of it is only gonna get worse as it gets better.

Michael Sharkey:
Yeah. And we've, we talked about this before, like we'll need AI models to help dumb it down so we can have some sort of understanding of AI

Chris Sharkey:
Or AI models, which can lie to you.

Michael Sharkey:
Yeah, yeah. AI models, which can lie to you. And, oh, sorry.

Chris Sharkey:
More importantly, I'm not saying they necessarily will lie. I'm saying they could be lying and you would have no way to tell. I think that's the problem. It's like, how do you know what the truth is inside of that?

Michael Sharkey:
Yeah. And it gets back to that discussion of what is truth? I'm sorry to keep banging on about this paper, but one thing I also found interesting is the idea, and it's actually in the natural selection one, that because we are imposing our values and our fundamental understanding of truths on the AI during training, we could actually be training it on things that are just simply incorrect, because it's being trained on our understanding of things as they are today. And so you can inadvertently teach this thing untruths very easily. And so I think we could also be a big part of that problem by how we train and try and govern these technologies as

Chris Sharkey:
Well, possibly. But I also think we've discussed in previous weeks this idea that the models themselves have an internal representation of what is true that isn't necessarily what it's saying to you. And that was proven in all the Bing stuff, where it was sort of, uh, you know, reflecting back to you sort of what it thinks you want to hear, rather than what it actually believes to be true. So I don't know about that. I actually think that the bigger the training gets, the more its ability to discern truth for itself will emerge.

Michael Sharkey:
I hope so, . I mean that that, I guess

Chris Sharkey:
That you like the truth.

Michael Sharkey:
Yeah. That's the point that I think's again really interesting: like, how does it get into that loop, or that prompt loop or that thought loop, where it can say, hang on, that's not right, I better go and research this more. Oh, okay, my conclusion is this. Without being prompted. And maybe that's just my poor understanding of how that could evolve. But

Chris Sharkey:
Yeah, so a good example of that is how they discovered that GPT-3 had chain-of-thought reasoning, and it did mathematical problems better when it actually said to itself, I think I need to do this, I need to do this next, this is my next step to solve the problem. Once you asked it to think in that style of thinking, it was better, right? And so I imagine that's an emergent behaviour. There will be other emergent behaviours that allow it to invoke those behaviours in itself to do better. Or, for example, delegate that task to another cooperative agent that might actually, uh, help it. So I think that those things, we don't need to worry. So I mean, we do need to worry from an existential perspective, but we don't need to worry about figuring out how to do them, because I think it will do that for us.
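
A minimal sketch of the chain-of-thought prompting Chris describes, again assuming the openai Python package; the word problem and the exact "think step by step" phrasing are illustrative, not quotes from the paper:

    # Sketch comparing a direct prompt with a chain-of-thought prompt on the same question.
    import openai

    QUESTION = (
        "A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. "
        "How much does the ball cost?"
    )

    def ask(prompt: str) -> str:
        resp = openai.ChatCompletion.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp["choices"][0]["message"]["content"]

    # Direct prompt: the model answers in one shot.
    direct = ask(QUESTION + "\nAnswer with just the number.")

    # Chain-of-thought prompt: the model is nudged to spell out its intermediate steps
    # ("I need to do this, then this next") before giving the final answer.
    cot = ask(QUESTION + "\nLet's think step by step, then give the final answer.")

    print("Direct:", direct)
    print("Chain of thought:", cot)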

Michael Sharkey:
We don't need to worry about how smart it will become and what it will try to do to us. That's out of our hands.

Chris Sharkey:
Yeah, exactly. I love the suggestion in the, um, in the evolution paper that making AIs honest could make them safer. Like, make an AI George Washington, then we could stop its otherwise deceptive plans by asking it what its plans are. I'm like, what an incredibly, incredibly basic and stupid way of dealing with these problems. Oh, just tell it to always tell the truth, right? And then you go, well, hey, what evil plans do you have today? Well, here's my plan, mate.

Michael Sharkey:
Here's my list of things I'm trying to do to take over the world.

Chris Sharkey:
I actually LOLed when I read that. I'm like, I cannot believe this is a serious suggestion. Just tell it to tell the truth. Oh, why didn't we think of that? Oh, don't worry, don't worry about that. I mean,

Michael Sharkey:
You can be pretty critical of that paper. Like, it, it's written by an AI safety researcher, I believe. And it,

Chris Sharkey:
I think it was thought provoking. I think a lot of the ideas in it were excellent. I I just think that the solutions part just wasn't, um, and how could it be? I mean, they couldn't have thought through everything yet. Um, you wouldn't expect someone to, there's been no research, there's no empirical evidence. It's all speculation. So I, I think that, um, that's okay. Um, and it's, it's great to lead it into a chat, but I just thought that one was particularly laughable.

Michael Sharkey:
The points in it are really strong. And I think comparing it to evolution, it's something we naturally did, I think, on the first or second episode of this show. But I don't know, like, I think it's the right comparison. And I think most of what they say in that paper is likely to actually happen, which scares me. The part where I just fall down again is their methods for control. That I just, I don't see how that works, because all the other things contradict it, you know, it's like, here's all the reasons why it's gonna kill us. But then Yeah.

Chris Sharkey:
Yeah. And like, here's the reason why evolution, multiple AI agents working together or against each other or whatever the case may be, will overcome anything humans do to try to control it. Now here's a list of ways we're gonna try to control it. You know, it's like the first half of the paper basically said we can't control it no matter what we do, and the second half was like, here's how we're gonna control it. It just didn't add up to me.

Michael Sharkey:
This gave me total LOLs. Um, it's Simon Willison again.

Chris Sharkey:
He's great.

Michael Sharkey:
We accidentally invented computers that can lie to us and we can't figure out how to make them stop

Chris Sharkey:
Like, yeah, I asked the AI to come up with funny names for that, like in the future where the AI tricks humans, cuz we're gonna need terms for all of these things. So the best I could come up with was AI jibbery and neural rs

Michael Sharkey:
AI jibbery.

Chris Sharkey:
Yeah. And then I went on and I did ones about, like, you know, AIs still make mistakes, so, like, what are some good terms for that? Like, good slogans for when an AI screws up and, like, accidentally burns down a factory or something like that. So do you wanna hear some of 'em? Yeah, yeah. Yeah. That's pretty good. So first one: embrace the AI, cherish the flaw. That was its number one suggestion. Uh, AI makes mistakes: imperfectly perfect, just like you. Um, AI makes mistakes because nobody's perfect, not even our code. AIs make... that's... we're only human. No wait, never mind. Um, because even silicon has its slip-ups. Um, proof that there's still hope for human job security, which I thought was the best one by far. Uh, if we were perfect, we'd be boring. Um, it's just our way of keeping you on your toes. Or, we may be artificial, but our errors are authentically human.

Michael Sharkey:
It's got a good sense of humour. It's really hard not to humanise this stuff. Like I,

Chris Sharkey:
Especially the whole laughing about the fact that the AI will ultimately control everything, and it can laugh about it.

Michael Sharkey:
Yeah. Stupid

Chris Sharkey:
Humans.

Michael Sharkey:
Like I know, I get it, it's based on the prompt, blah blah blah. I, like, I get that argument. I just don't, it's hard to steer away from those things when you are writing code. I think you

Chris Sharkey:
Have to be as, I don't think you have to be ashamed to think there's something to it. I'm not

Michael Sharkey:
Ashamed about it, but I think that, you know, it's just so weird when you're working with it in code and you're like, please don't do that, please do this, oh, you did such a good job. It just feels so strange to be talking to a computer and, like, having to stroke its ego to get what you want as a developer. Yeah. It's the weirdest sensation.

Chris Sharkey:
It gets defensive so fast, I've noticed as well. Like, you know, my language one I showed the other week, or alluded to the other week, every time it makes a mistake and I correct it, it's like, oh, I'm so sorry for the earlier confusion. And it's so passive aggressive. It's like, well, you are the one who was confused. It's acting like I was confused in what I asked it. And it's like, no need to be like that.

Michael Sharkey:
I still think that's cuz they trained it on Reddit. Like they just crawled Reddit,

Chris Sharkey:
. Well that, that's true. That's a good point. So

Michael Sharkey:
We didn't go through this proposed bill of rights. I might quickly just bring it up. I just, look, I think it's ridiculous, and I'm sure it'll be improved and evolved over time, I know it's just a draft, but why I think it's ridiculous is the following. Having talked about all those things like the natural selection, the eight things that, you know, we should reflect on about LLMs so far, then you look at what's being proposed. It's like, safe and effective systems: you should be protected from unsafe or ineffective systems. Cool. But how?

Chris Sharkey:
Yeah.

Michael Sharkey:
Algorithmic discrimination protections: you should not face discrimination. Sure. But how? They

Chris Sharkey:
Dunno how to stop it, is the problem. Yeah. Like there is no solution to that one, other than stopping the research and development, like, training of models, completely.

Michael Sharkey:
I mean, do they think that the AI's gonna come along, read this bill and be like, oh, all right, better stop. And the

Chris Sharkey:
Evolutionary theory, like, um, sort of goes against that as well, because it's basically saying that if it helps it survive and propagate, it'll do it. It doesn't really have to obey, like, moral or legal conventions. Like there's no imperative for it to do that. So, like, the ones that are gonna work better are the ones that are closer to what's true, or at least what will help it achieve its goals. So thinking it's going to follow some regulation just cuz you tell it to when you're training it, I just don't believe that that will work.

Michael Sharkey:
It's pretty exciting though living in the future having this. I think

Chris Sharkey:
That's the thing. There was a comment last week that sort of made me really stop and reflect, cuz it's like, oh, these guys are laughing while the world is being, you know, potentially taken over by AI, or I'm misquoting it, but you know what I mean. And I think we laugh about it because, I, you're allowed to have duality of opinion. Like, I'm really excited by this technology. I use it and love it. I'm excited to talk about all these different concepts and the future and all this sort of stuff. I fear partly for what will happen with it and what will happen with the world. And I also realise the need for safety and regulation. Like, I can hold all those thoughts at once. And I just don't, I think it's too early to come to, like, a perfect stance where it's like, this is my belief and I'm unwavering from that. It's unscientific to be like that, and I just don't think we know enough to have a sort of stated opinion that's so strong. I think it's

Michael Sharkey:
Just this polarisation of views on the internet. You are right, I'm wrong. And there's no, there's no middle ground for discussion anymore of can y

Chris Sharkey:
That off a bit. You're right, I'm

Michael Sharkey:
Wrong. Oh yeah,

Chris Sharkey:
Yeah, yeah. I know what you mean.

Michael Sharkey:
But yeah, I think people get what I mean. Like, it's like everyone has to have a side on something now and no one can really just articulate both sides anymore. And I do think it's through conversations and discussions like this, you still can. It's probably one of the only ways left, or one of the only venues. You've,

Chris Sharkey:
You've seen me change my mind, like, from week to week. Like, I definitely, it's a fluid thing. It's changing all the time. There's a lot of new information to go through, and then actually using it changes your opinion a little bit, and, you know, the reality versus what's being spoken about, the disparity there as well. It's just so complicated.

Michael Sharkey:
Yeah. The one thing for me is like, it's like we've developed this hilarious energy source or this source of innovation, but you know, it's really passive aggressive, it can be a little bit racist sometimes, like an old aunt or something.

Chris Sharkey:
Yeah.

Michael Sharkey:
But it could fundamentally change the world and make life for humanity phenomenal. Or it could kill us all. It's

Chris Sharkey:
Funny, there's a great movie called The Gods Must Be Crazy. And it's about the African tribe that makes all the clicks, you know, I forget the name, if it's the Maasai or whatever it is, right? But it's one of those ones that's completely isolated from society, and a glass Coke bottle falls out of a plane or something and the tribe discovers it, right? And it goes through, there's no words at all spoken in the movie, it's just watching them, like, observing their behaviour, right? And basically people realise that the Coke bottle's a useful tool for, like, digging or for transporting water or storing things. And basically the Coke bottle, this new technology that they've never seen anything like before, completely destroys the fabric of the village, because everybody wants to apply this new technology in all of these different ways. And it sort of finishes with the leader of the tribe getting the thing and throwing it as far away as he possibly can, because it's totally messed up their life. And it's kind of a great analogy for what we're going through with AI now. Everybody has different opinions of how it'll be applied, what the long-term consequences will be. And some people are like, let's throw this thing away, because no good can come of it. Watch the movie, it's great. Can't

Michael Sharkey:
Watch the movie. It's a great analogy. And I think interestingly this week, from just, you know, viewing what's going on, while we've heard some news around BloombergGPT, and we'll get to Meta's Segment Anything in a moment, what we've seen this week is a lot of papers, a lot of research and a lot of reflection on that six-month pause petition, ultimately. But I also think in the development community a lot of people now are starting to calm down and say, okay, what can I build? And I think a lot of people are getting to work. That's my consensus or feeling right now. A lot of people are putting their heads down and just getting to work and saying, hey, what can we do with this Coke bottle, to use your analogy, like, what can it do?

Chris Sharkey:
It's a great point you make there, because I agree. It does seem like we've entered, like, we go through phases: excitement about the development, now it's a work phase. Let's go apply this technology and see what it can do. Like, just like you've been doing.

Michael Sharkey:
Interestingly, outta that survey last week, uh, sorry, not survey, the petition to sort of slow things down, YouGov America did a poll: more than a thousand technology leaders recently signed an open letter calling on researchers to pause development of certain large-scale AI systems for at least six months worldwide, citing fears of the profound risks to society and humanity. Would you support or oppose a six-month pause on some kinds of AI development? And 41% strongly support, somewhat support 28%, somewhat oppose 9%, strongly oppose 4%, not sure 18%. So it really does show that the consensus across US adults that were surveyed, there was 20... Sure. Larry,

Chris Sharkey:
Larry Page wasn't on there... he might have used Google to vote 260,000 times.

Michael Sharkey:
Yeah. He's like, please pause. We need to catch up. ,

Chris Sharkey:
Keep going. The inevitability of the downfall of humans is something we should embrace.

Michael Sharkey:
Yeah. I just am amazed that, what, 69% of people support pausing AI development. It does. Oh,

Chris Sharkey:
Sorry.

Michael Sharkey:
Yeah, you interpreted that wrong. I was gonna let you figure it out on your own. Oh my god.

Chris Sharkey:
Sorry. I got

Michael Sharkey:
It. Totally. No, but I kind of, my point was more, you know, Larry Page wants to slow it down because Google's so far behind. But, um, I think what's interesting is I felt like in the technology community at least, people were sort of 50-50, or maybe 40-60: 60% were like, let's keep the party going, and 40% that felt like they were falling behind were like, please stop, we need to catch up. But when you look at the wider public, ultimately if they have fears around their jobs and their paychecks, which I think is ultimately the main fear people have, then

Chris Sharkey:
A legitimate fear, very legitimate. It

Michael Sharkey:
Is a legitimate fear. And, and people are just obviously not going to support development of a technology that puts 'em out of work. So, and at

Chris Sharkey:
The same time, isn't it like an asteroid heading towards earth and we're all voting that the asteroid should slow down?

Michael Sharkey:
I think that's the perfect analogy. I think you nailed it.

Chris Sharkey:
Yeah. It's like, you're not gonna stop this research. There's just no chance, I don't believe it, other than constraining the actual hardware. I mean, maybe, but I just don't see that happening. And they're proving, especially in the open source development community, that you don't necessarily need... I mean, you do, like, obviously the emergent behaviours are coming from the biggest models, but the research can continue without that. They can do smaller-scale research that's quite effective, and a lot of the cool stuff that's coming out is being done in that way. So I just, I can't see them stopping it. I don't think it's realistic.

Michael Sharkey:
That's why last week I said it's laughable that anyone will stop or should stop. Because to me it's that all of these things have to happen concurrently. You know, legislators are gonna need to get their act together and need to develop a better understanding of, like, you know, putting it on some of these leading companies and saying, we actually need to be able to control this 100%. We may have no solutions for that now, but we're going to have to figure it out, so get on it. Like, I mean, that's really the only thing I can think to do, is like, yeah,

Chris Sharkey:
Like use this to increase the urgency to do something about it. That's a great point.

Michael Sharkey:
That's why I don't know why we're talking about a bill of rights for humans. To me it needs to be like a list of morals that we want all AIs to have in society. Imagine

Chris Sharkey:
Getting people to agree

Michael Sharkey:
On morals? Yeah, this is the problem. I

Chris Sharkey:
Can't see that happening. Yeah, I mean, yeah, I don't know, but I mean, America's quite politically divided. I just don't see people agreeing on what is right and what is wrong, and getting it down in a series of rules. I mean, there might be some fundamental things like universal human rights that people don't disagree on, but that's impossible. And certainly not in six

Michael Sharkey:
Months. Maybe AGI has existed for quite a long time and it's been intentionally polarising everyone so no one can agree on anything, and therefore it's never legislated in the future.

Chris Sharkey:
Oh, that's the metagame there. That's the evil AI, that's the AI at play.

Michael Sharkey:
I don't know, maybe it's happening. I'm still looking for the big energy breakthrough, or, you know, some huge breakthrough in GPUs or something like that, that just comes outta nowhere and we're all meant to accept it. Like, oh, energy is now free.

Chris Sharkey:
Oh God. You can only imagine.

Michael Sharkey:
I mean, I hope so. It'd be nice. Um, so let's talk about Segment Anything by Meta. I'll bring up that paper quickly on the screen here and just give a quick summary. So yeah,

Chris Sharkey:
Did you know this was coming? It sort of came from nowhere.

Michael Sharkey:
No, it literally dropped right before we started to record. And I've gotta say before we get into it, a big thumbs up for Meta for making this open source and just releasing it all. I mean, you can literally download it now.

Chris Sharkey:
You know how you called me an AI prepper? My AI prepper instincts kicked in and I immediately downloaded the code, the weights, everything I could get my hands on.

Michael Sharkey:
Yeah, but let's talk about the prepper thing before we get into this. I think it's an interesting topic. So you said to me earlier, I need to get a big hard drive or whatever, like you just want some sort of protected database to store all these papers, research models, weights, everything.

Chris Sharkey:
I'm genuinely and strongly of the belief that all of this will be taken away from us and legislated and removed, and you won't be able to get your hands on it except under special circumstances. So I am getting everything I can get my hands on, so when that day comes, I have things to work with and on.

Michael Sharkey:
Yeah. So do you think it's like, I've gotta use a military weapon example in the US cuz they love guns, but say I had a rocket launcher in my house, the government's gonna say, no, you can't have a rocket launcher, cuz that could do destruction. Or if I had a nuclear bomb in my house, they would say, you can't have the nuclear bomb. I mean, these are extreme examples, but I think that's what it could be. Right. And so it's contraband.

Chris Sharkey:
Yeah. And I mean, we're talking about competing AI models that evolve. We're talking about AIs taking over jobs. We're talking about them having potentially secret plans. Like, all of these things could add up to a society where having control of your own AI, or having some sort of fighter in the ring, is important. And I know it's an extreme view, but the thing is, as they say in the paper, even if there's a 10% chance or a 5% chance that all of this can lead to major societal changes or the destruction of humanity or whatever it is, you at least want to have the ability to get an understanding of it yourself and have your own part of it. And if that's taken away, no one has the ability, no individual I imagine has the ability to afford the hardware, train your own weights, write your own code. I'm not that clever. I couldn't do it. Even with all the knowledge we have now, I couldn't create this technology from scratch. There's no chance. Definitely couldn't do all the training. So given that I can get this stuff, I'm going to get it and I'm gonna keep it. And I may not even have the ability to use it, but I want it.

Michael Sharkey:
Yeah. I must admit, even developing that little playground app that I was working on, I soon realised the limitations of relying on OpenAI's GPT-4 API, because they can take it away, and it literally goes down all the time because it's just so heavily used at the moment.
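For anyone building against the API, here's a minimal sketch of the kind of defensive wrapper that downtime forces on you, assuming the pre-v1 openai Python SDK; the model name and retry limits are illustrative, not from the episode.

```python
# Minimal sketch: retrying a flaky OpenAI call with exponential backoff.
# Assumes the pre-v1 openai Python SDK (openai<1.0); model name and retry
# limits are illustrative.
import time
import openai

def chat_with_retry(messages, model="gpt-4", max_retries=5):
    delay = 1.0
    for attempt in range(max_retries):
        try:
            response = openai.ChatCompletion.create(model=model, messages=messages)
            return response["choices"][0]["message"]["content"]
        except (openai.error.RateLimitError,
                openai.error.APIError,
                openai.error.ServiceUnavailableError):
            if attempt == max_retries - 1:
                raise  # give up after the last attempt
            time.sleep(delay)
            delay *= 2  # back off: 1s, 2s, 4s, ...

print(chat_with_retry([{"role": "user", "content": "Say hello."}]))
```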

Chris Sharkey:
And I would suggest that if you were podcasting about it, or tweeting about it or something, they may very well do that. You know, so you don't know what's going to happen there. Um, and so therefore, yeah, having your own things is, I don't think, too crazy. Yeah. I mean, no one thinks they're crazy, do they?

Michael Sharkey:
No. I think we're slowly becoming crazy doing this

Chris Sharkey:
Someone's like, oh, I'm doing it cuz I'm crazy.

Michael Sharkey:
Yeah. Like, I was literally trying to explain to my wife on the weekend what I was building and trying to do, and she just sort of looked at me like, what? And, yeah. You know, and I think there's also reasonable fears about that. It's not just prepping: instead of hoarding food and water and batteries and things like that, people are going to, you know, potentially hoard GPUs, or devices that are specifically used to train AI. They're gonna hoard all of these weights and models. But you know, it's hard not to go to those scary elements. Like, my wife said to me the other night, something like, what are our kids gonna do for jobs in the future? Like, what are they gonna do? And I don't really have a great answer to that. But I always think humans find a way to evolve with technology that they produce. So I'm again optimistic, but it's hard not to be a bit of a prepper and have your mind go there.

Chris Sharkey:
Yeah, agreed.

Michael Sharkey:
And on that note, back to, this is the worst segue ever, but Segment Anything by Meta. Yeah. So the paper says large language models pre-trained on web-scale datasets are revolutionising LNP with strong zero-shot and few-shot generalisation.

Chris Sharkey:
LNP? Please.

Michael Sharkey:
Natural language. What is it? Natural.

Chris Sharkey:
Oh, you said NLP. I said

Michael Sharkey:
Natural language something

Chris Sharkey:
Something that we are poor at and it's great at.

Michael Sharkey:
Yeah. Um, it's natural language processing. Yeah. Okay. Um, so fuck, I don't know where I'm up to now. Well, what

Chris Sharkey:
Does it do? I mean,

Michael Sharkey:
I think, so for those watching, we can just do a demo, so it's much easier to understand for those that are not. What it enables is just a very sophisticated form of what I would call computer vision, but it's not done how computer vision's typically done. So I've got an image up on the screen of some horses running through a field, and I can hover and it will automatically, in real time, it's just scary how crazy good this is, identify each horse as I hover. Or if I hover on the mountain, it will give me a pretty good segmentation of the mountain. If I hover on the sky, it will crop the sky out. So my personal view of this is, I think it has some great applications. The first would be in robotics: giving a robot eyes, being able to see the world, visualise it and be able to segment out all the data in real time and understand what the objects are in that world.
I think for projects, and Tesla might already be using this, but for their full self-driving technology where it's just relying on cameras and trained on the images the car sees, it would enable, or is enabling right now, you to process objects in pretty much real time, just like a human would when they're driving a car. The other application in my opinion is in health: being able to identify, in surgery, parts of the body or problems, cuz the AI can look and distinguish all the different elements of your guts when you're cut open on the operating table and say, hey, here's the problem, and maybe guide a robotic arm or help the surgeon. So I think there's enormous potential for this. I think this technology is much bigger than we initially are going to believe it is. Yeah.
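For anyone who wants to try the hover-to-segment behaviour themselves, here's a minimal sketch using Meta's open-sourced segment-anything package. The checkpoint filename, image path and click coordinates are illustrative assumptions, not from the episode; the key idea is that the image is embedded once and each point prompt after that is cheap, which is what makes real-time hovering possible.

```python
# Minimal sketch: point-prompted segmentation with Meta's segment-anything.
# Assumes `pip install segment-anything` and a downloaded ViT-H checkpoint;
# the filename, image path and click point below are made up.
import numpy as np
import cv2
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

image = cv2.cvtColor(cv2.imread("horses.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)  # one-off embedding; prompts after this are cheap

# A single foreground click, roughly where a horse might be in the image.
point = np.array([[480, 320]])
label = np.array([1])  # 1 = foreground click, 0 = background click

masks, scores, _ = predictor.predict(
    point_coords=point, point_labels=label, multimask_output=True
)
best_mask = masks[np.argmax(scores)]  # boolean HxW mask for the clicked object
```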

Chris Sharkey:
And they say that in the paper: it's potentially able to accomplish tasks unknown at the time of model design, which I think should just be written in all the papers from now on, because they've proven that people don't know the applications at the time they're releasing these things.

Michael Sharkey:
What's interesting about it too is how it's being prompted. So it's like a vector scope on the page. So you're essentially prompting it and being like, what's this? But you're not writing text like in an LLM where you're saying, you know, finish the essay.

Chris Sharkey:
Well, they can do that too though.

Michael Sharkey:
No, it can't. They haven't released that yet though, but, oh,

Chris Sharkey:
Okay.

Michael Sharkey:
But yeah, just being able to point and click, it's able to infer so much information from the prompt, which is your mouse cursor hovering on a horse. Like, it's just like,

Chris Sharkey:
But I mean, think of the potential, especially when, you know, obviously the next application of this is video, right? Like just frames of a video. I mean, it can already, I suppose, do that, I dunno what the speed's like, but like you say, identifying items in real time at that level of reliability is incredible. Not to mention image editing and manipulation, like I presume this means you can just remove any element from an image or add an element to any image and that kind of thing as well. Right. That'd be the natural corollary.

Michael Sharkey:
Yeah, I think on the dark side it enhances deepfakes. There's already a demo of it working for video that Meta has. Oh, right. For VR, which along with AR is obviously their big interest, they would've had to have developed this technology anyway. There are some concerns. Oh

Chris Sharkey:
That makes so much sense why they're working on it. That was my first thought: why this, why now?

Michael Sharkey:
Well, they wanna know what you're looking at, I think, to sell you ads. That's my feeling. I mean, this is what the company does. Interesting, isn't it? Like, you walking around the house looking at things, and they're like, oh, you know, he's probably interested in this. Or you're walking around a shop with your AI glasses on and it can retarget you. I mean, that's,

Chris Sharkey:
That's it's like where my

Michael Sharkey:
Mind goes, amazing

Chris Sharkey:
Research and technology and it's just trying to sell you Oreos or something. I

Michael Sharkey:
Said that a couple of weeks ago: the greatest minds of our generation have worked really hard at being able to sell us ads. And I think with Meta it's hard not to think that that's their overall objective.

Chris Sharkey:
Well, yeah, I mean, they've gotta make money to pay for all this stuff, I understand that. But yeah, it's still pretty exciting, and you definitely have had some great ideas coming out of it. My first reaction was, what are the applications of this? It again gets into that sort of paralysing thing where it's like, oh my god, what about this combined with GPT-4, it writing prompts for this and getting feedback from live images, and what you could extract from screenshots or just people's vision. You know, it's another major sensory input.

Michael Sharkey:
That's why I think the theme this week is people getting down and trying to figure out what they can do with what we've currently got. Yeah. It's sort of what we need to start doing. We're just getting all these wonderful toys and now we've gotta figure out how do we use them?

Chris Sharkey:
It actually, from a very childish level, puts another perspective on the six-month pause. It's like, guys, can we just all stop for a bit and play with our toys? Stop buying stuff.

Michael Sharkey:
Stop, stop buying new toys.

Chris Sharkey:
Stop buying me presents from the store. Just let me play with my existing toys. It's

Michael Sharkey:
Really the best analogy. Like with a child, if you buy them like 50 trucks, they dunno which one to play with and they're overwhelmed with choice, and ultimately nothing comes of it. If you give them one, they can create this whole adventurous world with it. Yeah, I must admit I am so jealous right now of teenagers living with their parents that are coders at home with all the free time in the world. Yeah. All of their expenses covered. Just being able to sit on a laptop and play with this stuff, I would love nothing more.

Chris Sharkey:
Yeah, and it's funny cuz it sort of plays back into a theory I've always had about development and creativity in general: you usually perform better when you have constraints. You know, like, write a story about this topic is a lot better than just write a story, for example. And I think that maybe that is a better way to learn, to say I'm only going to use Llama, or I'm only gonna use this combination of technologies to get to this purpose, and force yourself into that so you don't get paralysed by the thought, oh well, I could be using something better.

Michael Sharkey:
Well, that's what I had to do. I was like, I need a goal, because it's really hard, as you know, developing when you have no goal in mind. Like, you can play around with the technology, but you never get deep into it until you try and achieve a goal. So my goal is

Chris Sharkey:
Discover the flaws. You start to discover the flaws when you have a real thing.

Michael Sharkey:
Yeah, definitely. So my goal was like, I'm gonna build a personalised agent and let it suck in all the relevant data in my life. Like calendars.

Chris Sharkey:
My goal is to build average general intelligence,

Michael Sharkey:
Really, average general intelligence? That probably will... You keep saying average.

Chris Sharkey:
Is not average, is it? What is it?

Michael Sharkey:
What do you mean

Chris Sharkey:
AGI?

Michael Sharkey:
Artificial. Artificial general intelligence.

Chris Sharkey:
Yeah. I dunno why I keep saying average. I was listening to last week's podcast, and I'm like, jeez, that guy's an idiot.

Michael Sharkey:
Yeah. Nine episodes in and you still don't know the acronyms. Um, yeah. So it'll be interesting to see how Segment Anything plays out. I'm gonna play around with it this weekend and see what I can do with it. It's really exciting, though, that it's just all available, it's all out there.

Chris Sharkey:
Being able to run it yourself is a huge advantage. I'm definitely noticing running models locally makes me much more apt to apply them and use them, and it's a better way of developing than hitting an API that's slow and unreliable, where you know that whatever you're creating with it, you'll be dependent on this organisation going forward. So I think that this tendency towards open sourcing things is truly exciting.

Michael Sharkey:
So the last thing I wanted to talk about today: throughout this whole episode we've talked about the implications of AI, the polls, the papers, all

Chris Sharkey:
Human and feeble,

Michael Sharkey:
You know, all of these dark topics and things. But let's go back to OpenAI for a minute. And you mentioned some of this earlier, but are they becoming less relevant as we see some of these open source models, some new models from Facebook? Obviously today our own application, and my little silly AGI project, if you can call it that, is so reliant on GPT-3.5 and GPT-4.

Chris Sharkey:
Yeah. But do

Michael Sharkey:
You think their competitive moat, given that the more stuff you train these things on, the more emergent skills you have, could be on the fritz?
Chris Sharkey:
I do. And I actually think there's a few signs of it, with them sort of trying to slow down competition or alluding to trying to slow down competition. I think the other big one is this discovery, and it's a personal discovery, people probably knew this all along, that really it still is a word completion model in the sense that we've spoken about before. The whole instruct concept is a bolt-on that they've added. And I think that that bolt-on is what made it so accessible, cuz you can just tell it to do stuff, right, which makes prompt engineering less relevant. I feel like with the lesser models, they're actually proving that with the right prompts you can get nearly as good results on them. Right. And so a lot of it is about rethinking, perhaps, and Simon Willison mentions it in his blog, perhaps the whole prompt engineering thing is even more important than we said, in the respect that it appears to be really easy to interact with these models at first, but actually it's quite difficult to do it right. And if you can overcome that difficulty and do it right, you can do it with the other models and you don't need OpenAI's big one, because you are just relying on the crutch of them having a really good instruct model on top of their large language model, if that makes sense.

Michael Sharkey:
So the theory is that when I'm asking the AI a question or stepping through what I want it to do, its instruct model is superior. But if I go to, say, Llama, which is open source whether they wanted it to be or not, and ask the same thing, its instruct model is inferior at this point in time. So therefore, well, this

Chris Sharkey:
Is where, yeah, that's where Alpaca came in, right? Where Stanford used ChatGPT to essentially work out how to get that instruct thing on top of Llama, which is obviously not as good as OpenAI's model, but it showed that you can vastly improve a smaller model, or a different model let's say, by adding the instruct stuff on top of it. Which to me lends itself to the belief that if you had the ability to get the prompts right, and you iterated on that and got better at that, then you can get really good results out of the other models and you don't need to rely on OpenAI just because they have the benefit of that research and time on top of doing that really well.
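As a rough illustration of what that instruct bolt-on amounts to in practice, here's a minimal sketch of wrapping a raw completion model in an Alpaca-style instruction template via Hugging Face transformers. The model path is a placeholder for whatever local LLaMA-family checkpoint you happen to have; only the template wording comes from the Alpaca project, the rest is an assumption for illustration.

```python
# Minimal sketch: an Alpaca-style instruction template over a raw
# completion model. The model path is a hypothetical local checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "path/to/local-llama-7b"  # placeholder, not a real model id
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)

ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

def ask(instruction: str, max_new_tokens: int = 200) -> str:
    # The template is doing the "instruct" work: the base model is still
    # just completing text, but the framing steers it toward answering.
    prompt = ALPACA_TEMPLATE.format(instruction=instruction)
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    new_tokens = output[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)

print(ask("Summarise why prompt templates matter for smaller models."))
```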

Michael Sharkey:
But isn't it still going to be relegated to the few who have the money and expertise to train on larger and larger data sets to, you know, get more emergent behaviours and skills deeper within that model, as opposed to the

Chris Sharkey:
Yeah, in terms of the general intelligence, I suppose you're right: to get more emergent behaviours you need to, I mean, they've shown it, right? We spoke about it earlier that you need to train it larger, and so that does make sense that they'll have that advantage if that's their goal. But I think your question was really more about, is OpenAI going to remain the only player? And I would say that it isn't, I think that it can't be now. I think that, you know, like you like to say, the entropy is so great now, I just don't see them controlling the narrative for the entire industry going forward.

Michael Sharkey:
The BloombergGPT thing really stuck out to me. I was like, hang on, Bloomberg, 50 billion parameters, that's a lot. I'm like, if Bloomberg can allegedly build this great model that's a financial expert, acting as a future AI agent for finance people, then what's the moat that OpenAI have if you have all these specialised models? Or is the winner more likely to be just the largest training set that can be developed? And is their overarching goal to build the sort of Dropbox or consumer AI agent that helps you out in your life and has plugins and all that stuff?

Chris Sharkey:
I think now they have convenience because of the API's popularity, because they were the first, you know, the first popular one at least. And they have the best results at the moment. I think they're the three advantages, but I don't think all three of those things are going to stay true. The convenience factor on the open source side is getting a lot stronger, there are many more alternatives about. The popularity, you know, is waning, which is why we're having this discussion. And whether they're the best or not will be put into question as other companies start to train larger models. So, you know, in terms of a moat, I wouldn't be feeling too confident. Probably the 10 billion and having Microsoft onside helps a lot. But you know, in any market, we saw Google come out well after the other search engines were established and just totally crush the market. So there's nothing to say that the same thing won't happen in the AI space.

Michael Sharkey:
The other open source model that was released by Berkeley this week as well, I mean, we didn't even mention this. There's so much happening. I didn't

Chris Sharkey:
Even know about it. Well

Michael Sharkey:
It's called Koala. I tried it out. It's a dialogue model for academic research, and yeah, there's an online interactive demo and it's pretty good. Like, I think this is what interests me: there's just so many open source models now coming to fruition that are pretty good. They can write just as good code in a lot of cases, when prompted correctly, as GPT-3.5. So yeah, it does make you question where this is going and how relevant OpenAI is going to stay. What I would say, though, is I sound a bit negative on them, and I'm certainly not. I'm benefiting from all of their technologies and, yeah, what they've achieved. They've basically invented fire: we're all cavemen and they showed us fire, and now everyone's like, oh, we can make fire too.

Chris Sharkey:
Yeah. And I think partly, like, I probably sound negative towards them too. I'm not, I really respect what they've done, and they've made it accessible and they were the first. I think it's more that the fear of it being taken away is alleviated by all this other stuff. I think that's why I naturally don't want them to completely dominate. I think that's the reason. It's just, I don't want it to be this walled garden that you can only have access to if you're privileged, like we are, you know, like we only have access cuz we can afford to and we got access. Whereas the open source stuff, as long as you've got a reasonable computer or a laptop, now you can actually get access to this technology, which, you know, I'm a big fan of. I like that. I think it leads to better innovation and it's just more exciting.

Michael Sharkey:
Yeah. So that teenager in a bedroom at home, yeah, exactly, can actually do it. You know, they're not going to have to pay. I do think

Chris Sharkey:
About that, you know, because computing used to be a bit like that. You needed to be able to afford the hardware, you needed to be able to afford the software, which is why computer piracy was such a big thing back in the eighties and nineties, because students and teenagers couldn't afford to use the latest compilers and things like that. So yeah, I think it's that sort of side of me, that sort of anarchist, you know, Anarchist Cookbook kind of feeling, that everyone should be able to, I think

Michael Sharkey:
That feeling's back. I think that feeling is back, the early feeling of the internet, like the transition from a bulletin board system, which was like a local internet for those that are unfamiliar with that stuff, yeah, to the internet. And we're kind of in the BBS days of that transition.

Chris Sharkey:
Up tone and

Michael Sharkey:
Yeah, it's honestly brought me back to life in a lot of ways, with wanting to code again and get excited about technology and where we're headed as a whole. So yeah, while we talk about all the risks and the negativity, deep down, I mean, we wouldn't be doing a podcast if we weren't this incredibly excited about what's to come.

Chris Sharkey:
That's true. That's true. And I agree. I think that the revival of that feeling, it sums it up perfectly for me.

Michael Sharkey:
All right, on that note, we will wrap it up. We'll see you next week. Thanks again for all your support, uh, your likes, your comments, your reviews. It means so much to us. I love,

Chris Sharkey:
I love the comments. I find them so interesting. It's really good to see.