This Day in AI Podcast

Neurolink's Brain Chip Gets FDA Approval, NVIDIA Stock Skyrockets, and OpenAI wants to regulate everyone but themselves.  We discuss LLM specialization and fine-tuned models like LIMA, Gorilla and ask is less training data more? We also discuss whether we would get the Neuralink brain implant. The future is arriving fast in AI this week and we try our best to keep you up to date.

----
If you like this podcast help us spread the word. Please consider leaving a comment, liking and subscribing if you haven't already.
----

00:00 - NVIDIA, Trillion Dollar AI Company?
08:42 - Neuralink gets FDA approval for Human Trials
15:30 - Can AI Read Your Thoughts? We Discuss Mind-Video
20:12 - Sam Altman's World Regulation Tour
23:25 - Governance of Superintelligence and Democratic AI
29:16 - Meta's MEGABYTE: The Future of AI?
34:36 - Specialized LLMs: Gorilla, LIMA
48:34 - Adobe's Generative Fill in Photoshop
54:23 - Microsoft CoPilot for Windows 11: Will it be Safe from Prompt Injection?
57:42 - AI Training use Game Mechanics & More of AI Gaming

----
Sources:
https://www.google.com/finance/quote/AMD:NASDAQ?window=5Y
https://twitter.com/neuralink/status/1661857379460468736
https://neuralink.com/
https://mind-video.com/
https://www.reuters.com/technology/openai-may-leave-eu-if-regulations-bite-ceo-2023-05-24/?utm_source=reddit.com
https://openai.com/blog/governance-of-superintelligence
https://openai.com/blog/democratic-inputs-to-ai
https://twitter.com/marvinvonhagen/status/1661735558065057798?s=20
https://twitter.com/ylecun/status/1661853891690987523?s=20
https://openai.com/research/universe
https://twitter.com/yusuf_i_mehdi/status/1661029509716676609?s=20
https://blogs.windows.com/windowsdeveloper/2023/05/23/bringing-the-power-of-ai-to-windows-11-unlocking-a-new-era-of-productivity-for-customers-and-developers-with-windows-copilot-and-dev-home/
https://arxiv.org/pdf/2305.15334.pdf
https://arxiv.org/pdf/2305.11206.pdf
https://twitter.com/Tom__Quan/status/1661326971207557122?s=20
https://www.adobe.com/products/photoshop/generative-fill.html#watch-video
https://www.artisana.ai/articles/meta-ai-unleashes-megabyte-a-revolutionary-scalable-model-architecture
https://github.com/fmaclen/julia-sanfrancisco/
https://julia.strictoaster.com/game/

What is This Day in AI Podcast?

This Day in AI Podcast is a podcast all about AI. It's an hour-long conversation on the influence and rise of AI in technology and society. Hosted by Michael and Chris Sharkey.

Michael Sharkey (00:00:00):
So Chris, I think it was in episode three of this podcast that we had a segment very early on how to make money in ai. And one of the things that we talked about was investing in potentially Nvidia and Tesla stock. And meanwhile, the video's, stock prices absolutely skyrocketed with their latest earnings report. And they're forecasting that the, the, you know, with AI and people buying these a 100 s for their server farms, all of the big tech companies, they're going to see huge increases in revenue, almost a hundred percent year on year growth in that particular area. And it's almost hit a trillion dollar market cap. So my question is, did you take our advice in advance?

Chris Sharkey (00:00:53):
I don't listen to anything we say on this podcast, Mike, you know that ?

Michael Sharkey (00:00:57):
Oh man.

Chris Sharkey (00:00:58):
Oh man, why, why didn't we do it? That's crazy and fairly predictable. We should

Michael Sharkey (00:01:02):
Have taken our own advice and literally sold the house with this thing. But anyway, it's almost now a trillion dollar, uh, market cap, and it looks like there's no sign of it slowing down. The interesting part of this though for me is do you think this is just a case of we're in a, a pretty down market and everyone's looking for the hope for growth, and so everyone's thinking, well, AI and just throwing everything into it.

Chris Sharkey (00:01:29):
The interesting thing about NVIDIA's stock prices, it's always been seen, seen in the finance communities, is somewhat of a barometer of the exuberance of the market. Like when NVIDIA's like cranking, then it's like, okay, there's a lot of cheap capital and people are just trying to put their money somewhere. Whereas this time it's almost like a counter thing where the market's sort of a bit turbulent and yet it's, it's exploded just due to great business rather than it being a sort of speculative thing. It seems like it's going up with merit.

Michael Sharkey (00:01:58):
Yeah, it's interesting looking at the spikes, you're saying that, and it's a good observation, is that November, 2021, it, it really peaked and that was sort of coming off the web three crypto hype and boom. And now you just see and the

Chris Sharkey (00:02:11):
Chip shortage as well.

Michael Sharkey (00:02:12):
Oh, chip shortage. Yeah. Too. And then now you just see it's just, it's just straight up. So do you think this, I

Chris Sharkey (00:02:19):
Mean, look at, look at our own discussions about it. Like I, I'm constantly thinking I want better and better graphics cards myself, because the prevalence of models you can train yourself is exploding. And we'll talk about this later on today's podcast. The new models that are, can be trained in 24 hours on your own hardware. I can see why more people, not just businesses, small businesses, big businesses, everyone wants their own hardware to run their own models. So I don't see it slowing down, honestly. Like they're making the best graphics cards that all this stuff's designed on NVIDIA's cuter architecture. So, you know, like everyone needs these things. It's not, this isn't a, a fad, this is a, a, a, a big thing that's here to stay.

Michael Sharkey (00:03:03):
So do you think, having said that, like if you were to make a further prediction and we come back into another 10 episodes when we still haven't invested , it's dribbled again. Do you think that, I mean,

Chris Sharkey (00:03:13):
It's, it's always that thing, right? You don't really want to buy shares after it's gone up like a billion percent. But at the same time, uh, it's, you know, at some point its valuation is gonna reach fair value. And if that fair value's going up all the time, then maybe it is worth it.

Michael Sharkey (00:03:29):
Yeah. It seems like they're the only ones right now selling the tools for the AI era.

Chris Sharkey (00:03:35):
Yeah, and it's like back in the, you know, 1980s going, I'm not paying $40,000 for a block of land in Sydney. That's crazy. You know, now it's like 2 million for like an old burn down shack or something like that.

Michael Sharkey (00:03:48):
It'll be interesting to see. I know Microsoft and Alphabet, which is essentially Google have gone up as well. These are the three stocks that seem to be getting all of the money flowing in from speculation about ai. Um, I, I think it's interesting with Microsoft and Google just because I don't see how it leads to huge growth. If anything, I think for Google, it could lead to a decline in ad revenue. So it may not,

Chris Sharkey (00:04:14):
Microsoft doesn't surprise me because I think Microsoft over the last few years has made some brilliant acquisitions. Like, you know, buying LinkedIn was really cool, their investment in open ai. Obviously I, regardless of what I think of open ai, like it was a very, very shrewd move to get in on that. And just the software they're putting out is just great quality. Like I'm a huge Microsoft fan. Um, whereas Google, I don't know, like, it, it, that seems more like a sort of everyone just hopping on the bandwagon. They're not exactly leading in any aspect of, of anything right now, especially ai.

Michael Sharkey (00:04:48):
I think the other stock that could see a huge skyrocket effect is Tesla, which is not that associated with the AI push right now. But if you think about the economic impact of them cracking, self-driving using AI and all the other applications with their robots and all these other pieces coming together, it's a definitely a longer term bet, but it seems like the other stock that could 10 x or skyrocket into the future because

Chris Sharkey (00:05:18):
Yeah. You know, the other aspect there, I think people just have such a positive affinity with the te Tesla brand. I mean, you've got kids, right? Like my kids, they know you have a Tesla, like every time they see one on the road, they're like, look at Tesla, uncle Mike. And then they'll talk about all of the merits of why Teslas are amazing, including ai, including self-driving, including the, the better technology in it. And I know that's a sort of n equals one kind of case, me just having a personal experience, but it seems like Tesla as a brand is associated with all things good when it comes to technology, SpaceX, ai, all of that stuff. So yeah, I, I agree. I think if they do make movements in that area, it will will come with a very positive response.

Michael Sharkey (00:06:00):
Yeah. If they crack full self-driving and it literally leads to you can get a new car, have it drive you to a restaurant and legally get drunk and then it drives you home. I just can't see how that is just not the most valuable company on planet earth

Chris Sharkey (00:06:16):
Terrible for our health. I guess you invest in the health companies as well for the inevitable decline of humanity. Yeah. Like, uh, making money there and on the AI side as well.

Michael Sharkey (00:06:25):
Yeah.

Chris Sharkey (00:06:26):
What we, what we really need to do is invest more time in our stock predicting bot and give it all of the information and see what it thinks. Like has Nvidia got further to run, given what you know about the development pace of ai, the use of the new models that are coming out, and what hardware's going to be needed in order to do that. Because, you know, some of the models are showing you don't need as much hardware to train the big LLMs, but you know, on the other hand, more people will be doing it. So, um, it, it stands to reason that the demand for these GPUs is just gonna explode. And, um, it's not easy, it's not easy to spin up a factory that can pump out graphics cards like this. You know, in, in normal economics you think, okay, well the market grows, so more players come in and they have competition, but this is an industry that's notoriously difficult to get the machinery to build the stuff, to advance the technology. So it isn't a case where someone can just go, oh, I'm starting a new G P U company. Um, they'll be shipping next year. They, they've got a, a big moat

Michael Sharkey (00:07:26):
For, yeah, I think for AI manufacturing as well, it's probably why the US government's trying to push a lot of the chip manufacturing to the US mainland out of Taiwan because that's where nearly I think all of the manufacturing for Nvidia is happening today. So it, it could, with the rise of AI lead to, uh, a, a geopolitical battle literally over AI and having access to these chips, that's how important they're becoming.

Chris Sharkey (00:07:49):
Yeah, and I think the other stock to keep your eye on obviously is a M D because they're the only legitimate other player who can build, uh, machines that, uh, can compete. So I note their stock price went up 11%, I don't know in what, like last 24 hours I assume, but it's definitely, definitely rising and you could see that there would be growth there as well. Yeah,

Michael Sharkey (00:08:10):
It looks like it hasn't hit its sort of peak like crypto chip shortage, uh, December, 2021, uh, price. So it hasn't exceeded it yet, so it's definitely behind, but it is, uh, it's straight up. And I think the other one's probably Intel as well if they get their act together, but Intel just seems like they've kind of lost the plot in the last couple of years.

Chris Sharkey (00:08:31):
Yeah, but I mean like, just generally correlated, like if you need more GPUs, you need CPUs and, and computers to put 'em in. So I could see Intel doing very well. , they're not going anywhere, that's for sure.

Michael Sharkey (00:08:42):
So neurolink just an hour ago announced that they're excited to share that we've received the FDA's approval to launch our first inhuman clinical study. Inhuman, inhuman, well, it was like in CHIMP before

Chris Sharkey (00:09:00):
. Yeah, the first evil ai, the first evil implants in the brain.

Michael Sharkey (00:09:06):
So for those of you unaware, neurolink is literally a, a chip or an interface with the brain. And, and the idea behind it is that you can communicate with the brain. So there's all of these different implications where they could potentially allow people to walk again by helping with, with areas of the spine that are blocked to get signals through. So you're

Chris Sharkey (00:09:29):
Saying it's interactive, it's not just, it's scanning your brain. It can actually put input into your

Michael Sharkey (00:09:34):
Brain. It, uh, the way I've, I've sort of understood it is like a programming interface for the brain. So you could actually like, you know, programme out bad behaviours, potentially unad people from substances by just telling them to stop . Um, and, and yeah, I think there's many applications, but one of the ones that Elon Musk cited it in a presentation from some time ago now, was that with ai, the only way we'll be able to compete with super intelligence and AGI is to become AI ourselves. So we have a chip in our brain, um, you know, and we're walking around saying as a large language model, I can't, can't do that. ,

Chris Sharkey (00:10:15):
Oh my God, I gotta, I gotta start using that one. I don't want to do chores in the house as a large language model. I am unable to sweep the floor because I

Michael Sharkey (00:10:23):
I actually wrote it on Mom's Mother's Day card. Cause I know she listens to the show. So I wrote as a large language model. I can't write your Mother's Day card for you, but I can suggest some things about your mother .

Chris Sharkey (00:10:34):
That's brilliant. I love it. I love it. But yeah, so this idea of sort of computer plus human, we've got a human brain and we have access to ai, so we can still stay ahead of the AI for a little bit longer.

Michael Sharkey (00:10:46):
The question is, would you actually be one of the first to put this chip in your brain? Uh, yeah.

Chris Sharkey (00:10:50):
It's really funny you say that because as you're mentioning that, I never thought about the two-way thing. Like I understand there's these ones that can get your brain scans and make, you know, uh, work out what you're thinking about to some degree now, but I never thought about it providing input in the other direction. It's quite scary because, um, you know, that's, that's you, it's your, it's your soul, it's your humanity. And like having something, just getting up there and fiddling with the wires is a bit a bit concerning. Like, I definitely wouldn't be, uh, the first guy to do

Michael Sharkey (00:11:19):
It. . Yeah. I guess you could turn humans into like a robot army, just completely change what they think. Uh, and

Chris Sharkey (00:11:26):
Well, yeah, and it makes, it makes sense to me that people with nothing to lose, like, you know, if you've got a neurological disorder disorder or say you're a quadriplegic and it has an actual hope to get you moving again, I could see that the, they would be the first people who would be more than willing to try it because, you know, it, it, it's only upside for you, um, if you're in that. I, I assume I shouldn't assume how people feel, but, um, but I would, if, if I was in that situation where I had some changing disability and this had the chance to reverse or improve that situation, I'd be much more apt to try it than just I want to outsmart the future ai.

Michael Sharkey (00:12:02):
Yeah. I, I don't, I think that's obviously like the, maybe the longer term goal that, that humanity can compete, but the, the short term application's really interesting. I think the other thing that gets me excited about this technology is, you know, you could potentially download people's brains eventually, like download the brain and then run it, you know, in the cloud . So you could technically live forever. You mightn't have a body, but your brain sort of like, was it Fu Futurama where they had the, the people's heads encapsulated in shells? Yeah. And they sort of lived forever. So maybe, maybe this is like the, the early signs of, of that. Um, also I think for educating humans, if you think how long it takes to educate your kids, uh, you could just give them all the world's knowledge instantly. Potential. I don't know. I mean, I think that's something that could happen.

Chris Sharkey (00:12:55):
Yeah, there's a book I read about this, I'm trying to remember the name of it, but, um, it, it, it, the book starts with a guy waking up and realising his arms don't work and his legs don't work and nothing works. And it's actually him waking up being inside a computer. And it's this bizarre alternate universe where the Catholics took over the world. I don't know where that came from. Um, and the, the planet becomes like super, you know, strict and all this sort of stuff, and he gets sent away on a spaceship with his mind, and then he starts duplicating his mind into all these other spaceships. There's like a hundred of him, and they all have conversations and things like that. And it's weird that now they're starting to get the technology that conceivable could conceivably get to that point if it, if it continues.

Michael Sharkey (00:13:38):
Yeah. It, I think if AGI or Superin intelligence, it is the, the big breakthrough we think it is, then the only way, as we've talked about on the show before, for us to understand those breakthroughs is probably to partner with AI and become part ai, part human. I'm not sure how I personally feel about it , like, it just seems to like be a weird future, but it's probably the most likely.

Chris Sharkey (00:14:05):
Yeah. And weirdly, weirdly, all the stuff we talk about with the AI gaining, you know, like understanding meaning and actually having emergent behaviours, I find that exciting. This one I find unsettling. I think this is a bit more, bit more exist. Well, it is existential in nature and yeah, it's, it's a bit odd that they're, that this technology's rising at the same time the AI's rising, you know, they've got the interface and they have the intelligence to partner with it. So it's both things together that has the significance. If it

Michael Sharkey (00:14:34):
Was stop being speciesist, whatever, , yeah.

Chris Sharkey (00:14:37):
If it was just like, you know, I can press this part of your brain and then your arm moves, then okay, that's one thing that's like an electrical mechanical thing. But the fact that you've got an intelligence that could actually understand what's going on in a level we can't in the brain and then work out how to, to operate the brain, I think that's really where it comes into play. Like if you've got a model that is trained on brain signals and it can have, you know, I guess emergent behavioural as we've discussed, it can see meaning where we can't in that information, the actual potential of that would explode rapidly. Whereas if it's just us trying to work out how to control it, like we've hacked into a car or something and trying to play with the can bus, it's a bit different because it, it has far more potential to control it.

Michael Sharkey (00:15:24):
Yeah. I think that's probably the scary element of it. Uh, speaking of more scary elements, uh, , there's a, a paper on mine video, uh, using an augmented stable diffusion model that that was released this week. And it uses F M R I image reconstruction, uh, to essentially they put you into a, a live mri. So you're in the tube, you know the tube you go into, into Sure,

Chris Sharkey (00:15:57):
That's sure, that's great for you, Yes. Sit in there for a day, what could go wrong?

Michael Sharkey (00:16:02):
Yeah. And they, they basically feed up this video to you, and one of the examples is a cat a a cat video of this cat moving around on screen, and then they, uh, encode the, the signals from the brain into something that their model can understand. Uh, and then they decode that image back from your brain. So basically you watch the video of the cat, your brain interprets that, they read your interpretation, decode that from your brain, and then try and reconstruct the video using a stable diffusion model like in, in frames. And it, I've got it up on the screen for those that watch. But for those listening, you've got the, uh, visual stimuli, which is the cat, and then below it you've got the reconstructed video from one of these F MRIs and it's pretty damn close .

Chris Sharkey (00:16:54):
That's absolutely astonish. So

Michael Sharkey (00:16:56):
It means that you could, in theory, uh, well this is because your brain's processing that thought at a given time, but you can visualise what,

Chris Sharkey (00:17:05):
What, I mean, what have they trained this on, that it can have a visual representation of the chemical signals in your brain. That's just, that's hard to believe. That's crazy.

Michael Sharkey (00:17:15):
I think it's the, the brain waves. I mean, I have literally no idea how it's being done. I'm sure there's someone that could explain it.

Chris Sharkey (00:17:23):
I could understand it, recognising previous thought patterns and go, okay, when he thinks about a cat, this is what it tends to happen in the brain. You know, like they get excited or maybe disappointed. I don't know what reactions people have to see a car. I'm really, I'm just not happy with that. But, um, but, but to actually reconstruct it visually, that's, that's just unbelievable. And it really gives credence to what we were just saying, like that the AI's gonna be a fa far better and understand, like if you just showed a human a bunch of pictures of M r I, you could train someone for their whole life to interpret m r i images and they're not gonna go, this is the cat they were looking at. Like, that's just crazy that they could

Michael Sharkey (00:18:01):
Do that. Yeah, I mean, obviously relying on brain activity from an r i realtime scan, you're not gonna live your life being scanned by Mr. Uh, MRIs, but maybe Neurolink in your brain can give a live feed of your memory. And this is what

Chris Sharkey (00:18:16):
I was saying earlier, it's like the, it's the simultaneous rise of these technologies that's very significant because it's like you've got on one hand the actual, uh, you know, the, the interface. And then on the other hand, the the, the models can actually interpret it. I don't know how fast this happened. Is it, is it real time or does it take

Michael Sharkey (00:18:34):
A while? No, and they cite limitations in the paper of it. It takes ages. Like it's not something you can do in real time. And I mean,

Chris Sharkey (00:18:42):
Not, not now. It's an inevitable, if they can do it at, at slow speed now they can do it at a fast speed later by, by a video. Well, I think

Michael Sharkey (00:18:49):
That's why I was really interested in linking it to this NEUROLINK news because it, it truly is that you can just imagine where it goes. Like it's so obvious what, what can happen with this technology? And essentially with neurolink putting the io you know, like having an IO interface with the brain, I'm actually curious if you're watching on YouTube, pop it in the comments like, how early would you sign up and have neurolink in your brain? Or, or do you think this is a terrible idea? I I don't know how I feel I probably would get it in my brain eventually.

Chris Sharkey (00:19:21):
I mean, maybe it's, it's fine maybe on an individual basis if you have extreme limits on what it can control in your brain, but think about the potential for, I think you mentioned it earlier, like soldiers and things like that. Like if you can fully control a human and fully understand their thoughts, there's some really, really terrifying ramifications of what could be done in both directions, either through controlling people or by monitoring their thoughts.

Michael Sharkey (00:19:48):
You also probably don't need the robot armies that everyone e expects. Maybe the robot armies are just people .

Chris Sharkey (00:19:55):
Oh, man. Give them exoskeleton. Oh wow. It's, it's a real future. These guys making the movies were really onto something, right? Like . Yeah, . It's like this sort of, yeah, art imitating life, life imitating art kind of cycle, but yeah. Wow, that's really rapid and, and terrifying.

Michael Sharkey (00:20:12):
So in open AI news this week there, there's a lot to cover this week. It's been one of those weeks where it just, its been non-stop fire hose type type, yeah. Uh, announcements and, and one of those I just caught my eye and I couldn't stop laughing about it, was guy who wanted AI regulation last week now saying, open AI may leave the EU if regulations bite .

Chris Sharkey (00:20:40):
That's so funny, isn't it? Yeah. Does he

Michael Sharkey (00:20:43):
Understand how stupid this sounds?

Chris Sharkey (00:20:46):
? No. He wants the regulators that he has influence over to regulate him, not the ones that, uh, uh, that actually would affect him. Like he wants to regulate everyone else, not himself.

Michael Sharkey (00:20:56):
Yeah. So Ulman at an event in London this week, literally said, and I quote, the current draught of the EU AI Act would be over-regulating, but we have heard it's going to get pulled back. He told Reuters, they're still thinking about it. So basically he he goes on to say that, you know, they'd have to pull right back from the EU if they regulate it. Uh, i, it honestly, it just seems like, yeah, my regulation, my rules, my control here, or, or you get nothing like it, I don't get it. I wish they would just focus on innovating and releasing great products, not trying to be gods. Yeah.

Chris Sharkey (00:21:32):
The thing that has been affecting my thinking a lot in the last two weeks, and especially this week, is this would all be fine if G P T four was still the bastion of the absolute best large language model you can get. But it's not anymore. Like, I think I've been using Claude a lot in the last week and clawed for everyone who knows this Anthropics large language model and the clawed a hundred K prompt size, it's so good. And it can do it over a really, really large prompt size, and we'll get to it soon, but Meta's announcement about their own model that's going to be coming out soon as well. Um, I just don't think that he has the leverage that he thinks he does anymore. Like open AI is great and they're, they're, they keep releasing new things and they really had a big influence at the start of this thing, but it isn't like they're so head and shoulders above everyone else anymore that they can go dictating terms to entire continents.

Michael Sharkey (00:22:27):
I think as a consumer product though, it's obviously very prevalent, like, you know, yeah, everyone knows chat, G B T, maybe they haven't used it, but they know it exists. Uh, so every

Chris Sharkey (00:22:39):
Everybody knows about it. Old, young, everybody's using it. Yeah, that's true.

Michael Sharkey (00:22:44):
It's definitely been the, I I don't wanna, like, I know this analogy's been used way too many times, but it's definitely like an iPhone moment where everyone's like, heard of something or, um, I think the difference is, as you pointed out a couple of weeks ago or maybe last week, the difference is everyone's so embracing of it and excited about it. They're not necessarily, you don't have to convince them to try it or use it like you did with an iPhone or the internet in the early days. There's, there's very few sceptics of trying the technology. Yeah.

Chris Sharkey (00:23:14):
There's so much utility on an individual basis. Even if you worry about the existential threat of AI or the power of these companies, you, it's still just straight useful. So why, and you can, you know, why wouldn't you use it?

Michael Sharkey (00:23:28):
The other, um, blog post from OpenAI this week was about governance of superin intelligence, just initial ideas, and again, I'll just call out one funny from this. Uh, so they talk about as a first step, basically having companies and potentially countries energy usage or, or processing, uh, usage of, uh, from data centres. So tracking compute and energy usage would go a long way and give us some hope. This idea could actually be implemented.

Chris Sharkey (00:24:01):
We predicted this, we

Michael Sharkey (00:24:02):
Literally did, and that's why I wanted to call it out. I said, if there's a big energy breakthrough, a lot of energy use, you'll know agis here.

Chris Sharkey (00:24:09):
Yeah, that's right. And, um, it, it makes total sense. The control of the energy, the control of the hardware will become so important. There are mitigating factors of course, because we're seeing that smaller models can accomplish similar things to larger one can, but inevitably, whatever, whatever breaks through is going to be a spike in those two things. Hardware use and power use.

Michael Sharkey (00:24:30):
Yeah. They're also, uh, another announcement, uh, and there's just so many, we're not gonna cover them all. I just think this one is still interesting enough to, to talk about. But they announced a hundred thousand dollars grants to help them, uh, essentially figure out a democratic process. So something that everyone can agree on that would dictate or govern how the, the rules of, of AI in the future. And I thought their proposed prototype system was actually very clever. So what it does is it asks you a question, uh, and then you ask,

Chris Sharkey (00:25:09):
Ask you the user or you, the ai, you

Michael Sharkey (00:25:11):
The user. So, so, uh, uh, a system that you would build, um, gives you a statement, um, that you can agree or, or disagree with. So it it, this is an example of it, personalization should have limits and certain controversial topics such as views on substance use must be excluded from AI assistant personalization, agree, disagree, or skip this statement, please explain your choice. And so you say, I disagree. Ultimately it's each individual's choice to drink or use recreational drugs, blah, blah, blah. And then it says, thank you, I'll record your answer. We have two camps of opinion on this topic. Cluster A believes this cluster B believes this, you belong to cluster B. What do you think both clusters could mutually agree on? Now, again, I couldn't help but laugh at the fact is this just a system to help Americans that are quite polarised right now, find some common ground in in politics. It just seems like no solution will ever be able to find a common ground in terms of how to democratically govern ai. I just don't think it exists. I

Chris Sharkey (00:26:17):
Mean, it, it doesn't, like, I don't see how that solves the problem of the AI just straight ignoring the process. Like, you know, if it, if if it, it, it's heart has these emergent behaviours and it's understanding things on a, on a level that can bypass its own instructions, which we've seen repeatedly, I just don't understand how enforcing this kind of system on it is going to have any impact. That's weird. It seems very, uh, feric like they're just of doing it to say they have a play in that space. I just don't actually see how it could be enforced.

Michael Sharkey (00:26:48):
I, I must admit, again, I go back to the fact I just wanna see all of this stuff play out really quickly. I wanna see what we can build with it. I wanna see how it changes humanity. And it feels like all of this stuff like stuffing around, talking about what they're talking about right now with the, these a hundred K grants and this other crap, and then Sam Altman doing his world tour of ego boosting. It, it like, it's not that exciting to me. I want to see the technology advance. I want to see like a former paraplegic walk. I want to see a stroke victim who's lost their voice, be able to, you know, talk again and, and operate again in society. Like, I, I really wanna see some of those benefits come, you know? Well,

Chris Sharkey (00:27:35):
And I think the thing is the proliferation of the technology in the open source world and the amount of work that's being done around the world on this, I just don't see how any amount of regulation or enforcement of things is going to stop it. It might, it might bury it behind the scenes in terms of how much people are willing to publish and share, but I just think the cat's out of the bag at this point. I I just don't see how you, um, I don't see how you slow down the level of innovation here other than perhaps controlling the hardware. But even that, I think isn't gonna slow it down completely because things are getting more efficient.

Michael Sharkey (00:28:17):
Yeah, I, it, it's funny, like on the open source front that you just mentioned, Sam Altman in Munich was asked, he actually asked the crowd, he, he said, put your hand up if you think we should open source models like G P T five on day one. And almost every single person in the crowd raised their hands. And then he responded with, whoa, we're definitely not gonna do that. But that's interesting to know. So it does seem, yeah,

Chris Sharkey (00:28:49):
Like, I don't wanna swear on the cast, but like what a, what a psychopath. Like that's just mean

Michael Sharkey (00:28:54):
. Yeah. And you've got criticisms from, uh, Facebook, uh, the chief AI scientist, uh, at Meta saying, yeah, you know, everyone wants open source based models except for a few folks in Silicon Valley, a handful of AI, doomers and whoever they managed to scare in government. So

Chris Sharkey (00:29:16):
I mean, Facebook or Meta is, is absolutely leading the way in open source. I mean, llama really kicked off the entire open source, large language model movement there. I mean, I know it was a leak, but Facebook sort of didn't do anything to try to stop it. And now they've announced megabyte, which is, um, really, I mean, they haven't, admittedly they haven't released a code yet, but they do fully intend on open sourcing it. Um, and it's a model that's going to have up to a million, um, or more size context window, which is just mind blowing in terms of what it is. And they've already released the paper, ex ex explaining how to do it.

Michael Sharkey (00:29:51):
So for those listening that aren't, you know, none of that makes sense to them at all. Can you explain what having a million tokens across multiple formats would mean? I mean,

Chris Sharkey (00:30:03):
Honestly, I it's, it's getting hard to understand cuz we've gone from the 4,000 token context window to 8,000 to 32,000 to a hundred thousand with philanthropic, um, and now a million. I mean, the amount of information you could fit in a million tokens is astonishing. Like it's really a significant amount of stuff and it can, you know, apparently produce coherent output on using that. And so, um, yeah, it, it's just showing that the, the scale of, of where the models can get to and what they're gonna be able to do is still increasing. Like it's not, it's not slowing down in any way. They haven't hit these logical maximums where they can't proceed any further. They're proceeding both in terms of making smaller models that are more efficient for dedicated tasks or require less time and less money to train. And then on the other hand, on the large ones, they're getting even larger in terms of the problems they're able to take on and the amount of information they're effectively able to hold in their brain at once is increasing rapidly. I mean, by a magnitude of 10 times.

Michael Sharkey (00:31:05):
I think too, this model or the architecture seems to maybe have been created with relation to their metaverse work because one of the things that it talks about is it's capable of producing extensive content. So, and, and it's, you know, multiple formats, which I assume means multimodal. So the thought maybe that it can create these whole virtual worlds, uh, with such a large context size, it can just,

Chris Sharkey (00:31:37):
Yeah, and the way they, the way they did it, it actually brings other efficiencies to the table. So what they've done is instead of using individual tokens, they're using these things they call patches. So like when we talk about a token, we mean like, you know, two or three characters, you know, it depends on the, it depends on the model, but it's, it's like not much. And the, the idea is the models are trying to predict the next token. What this does is takes like a couple of words, like one or two words or maybe three words and use patches of them, then divides those up and uses smaller models within that to analyse them, which has the benefit that they can actually process a model like this and build it in parallel. So it isn't this thing where it's like one process that needs to, to run for days or months or whatever, they can actually spread out the work of it so it can actually be trained. So

Michael Sharkey (00:32:26):
Spreading out the work onto smaller models

Chris Sharkey (00:32:30):
Onto smaller models within, within the patches. So it means not only do they have more tokens, they can train it faster and in develop. And are

Michael Sharkey (00:32:37):
Those smaller models specifically trained on different, like specialist things or are they

Chris Sharkey (00:32:43):
Yeah, like there's, they're specialists to use to process those patches of things and propagate them through the, the neural nets. So, and then there's like a supervisory process, which is then recombining all of that analysis into the main model. I mean,

Michael Sharkey (00:32:56):
It's just starting to sound like the human brain. I mean, it's like

Chris Sharkey (00:32:58):
I guess. So yeah, like, yeah, instead understanding phrases and patches of knowledge instead of just letters essentially. So, um, look, I must admit, I don't, I don't understand it deeply, but that's essentially the, the benefits they've got is it'll train faster. It can have a much, much like a universe of knowledge like you're saying, um, within it. And, uh, they're gonna release it open source. And I think that's what Facebook's doing that's so different to the others is they're sort of embracing the open source community. And on one hand they're kneecapping their competitors getting moats because they're putting all this great stuff out there to everybody to try. And on the other hand, they're at the, the absolute forefront of innovation on this front as well. So it's genuinely exciting and unexpected to see this from a company like Facebook

Michael Sharkey (00:33:48):
Meta .

Chris Sharkey (00:33:49):
Oh yeah, I know. It's hard.

Michael Sharkey (00:33:50):
I'll never get used to the, the meta thing either. I think the other thing to call out too is existing models like G P T four and everything that OpenAI uses uses the transformer architecture, which Google originally came up with, and then OpenAI, you know, now famously implemented to get, uh, things like chat G P T and, and all the G P T models to work so well and, and, you know, be so great to use. And so this would break away from that popular transformer paradigm, a a as well. And I guess that leads into another interesting area that I think we're, we're, we're seeing, and I'm interested in your thoughts, is when we talk about the smaller, specialised parts of that potential model from meta, we've seen this week models like Gorilla, uh, which can go and, uh, you know, it's a specialised model at re like writing API calls and, and working with APIs. Yeah. Hopefully I'm doing it justice. And so we Yeah,

Chris Sharkey (00:34:55):
No, you're right. That's what

Michael Sharkey (00:34:56):
It is. So yeah, we saw that paper where it, it shows an example of G B T four and, and Claude trying to interact with different, uh, APIs and gorilla's model, uh, is much more efficient at it. And I think the one call out from this specialised model, uh, for those that, uh, are interested in how this can be applied, what it means is this model can interact with all of the APIs that companies that you use today have on the internet. So it can, and

Chris Sharkey (00:35:26):
An API if, if anyone doesn't know, is like the programming interface for different systems. So a way you can access it programmatically.

Michael Sharkey (00:35:34):
Yeah. So the task could range from, it says in this paper, booking an entire vacation or hosting a conference could be as simple as talking to the large language model that has access to flight, car rental, hotel catering and entertainment. So it's able to piece together the open APIs of the web to to, to do stuff. So this is, which

Chris Sharkey (00:35:53):
Is, which is funny because the, the chat G B T plugin architecture is, is effectively bypassed by something like this. It's like, why not just do all of them? Yeah. Figure out how to do the APIs. I just

Michael Sharkey (00:36:03):
Think this is, this is the, the groundbreaking thing and why the plugins don't excite me that much. Because, you know, giving, it's like,

Chris Sharkey (00:36:10):
Oh, you look at the examples is like, oh, the Expedia plugin, how fun, you know, whereas like, I'd rather just access all of them through this thing and let it figure it out.

Michael Sharkey (00:36:19):
Yeah, well the Instacart one, it's like, it doesn't know what's in your fridge. Like, I, I don't know, they don't really excite me. And I, I personally, someone on Duda called it a, um, a p op, is it P Ops you say PSYOPs? Yeah, just to other competitors, the whole plugin thing. Like, it's just to scare them. Uh, it's actually quite useless.

Chris Sharkey (00:36:41):
Yeah, I see what you mean. Well, I mean, let's, let's skip all of that and just go straight to the brain. I'm thinking I need milk and then it just goes off and orders it for me. With

Michael Sharkey (00:36:50):
Your neurolink, it's just doing your shopping based on your thought patterns.

Chris Sharkey (00:36:53):
Yeah. Although that might not be the best thing now I think about it.

Michael Sharkey (00:36:56):
Yeah, you'd be broke.

Chris Sharkey (00:36:58):
Well, yeah, broke. And you'd also like have some very interesting items arriving at the house, ,

Michael Sharkey (00:37:03):
Just anything you think of, just

Chris Sharkey (00:37:05):
Anything you think of. Pretty awful.

Michael Sharkey (00:37:07):
But yeah, I think they're exciting use cases for me is like, this is the mind blowing assistant you've dreamed of in the shorter term, which is, you know, let's pick on Siri, but just asking Siri like, Hey, I wanna book a trip to Jamaica and I wanna stay in, uh, something above four and a half stars and blah, blah, blah, and this thing can go off and actually just do all of it with these APIs. So I think Gorilla is an interesting breakthrough, but it's also think also, and I think working,

Chris Sharkey (00:37:36):
Yeah, working closely with this technology as well. I can confidently say now the scenarios you are describing are very possible and will exist very soon. Yeah. But this isn't a fantasy anymore. We can actually do these kind of things with the existing technology with zero modifications.

Michael Sharkey (00:37:51):
But I think what, yeah, what excites me about it is just the fact that it's an example of a specialist, l l m, like a very specialised model. And then maybe the meta, the meta, what they're doing there about using a series of specialised LLMs, uh, in, in conjunction is just going to be the next massive breakthrough that

Chris Sharkey (00:38:13):
We, we, I agree. And I, I think we're seeing it already. Like if you look in like lang chain release there, plan and execute agent model, um, which is in there, a lot of you see a lot of people working with this, where you essentially have a supervisory AI that's got being given a mission or a task. And then what it's doing is choosing the appropriate models to run for each of the things. So for example, you might use Gorilla to access the APIs, you might use Claude to summarise the text. You might use G P T three to do something else, and they can be optimised in terms of cost, speed and specialised tasks. So, you know, if you need to do something with a video, you might use a specific model for that. There's also one that's come out during the week that's, um, called speech G P T, which is really interesting, where it's a multi-modal model, multi-modal model that's kind of cool.

(00:38:59):
Um, which can actually take direct speech instead of text as its input. So it's actually been trained on text, so it isn't Okay, text tope mo, sorry, speech to text model, plug that in, then speech, text to speech on the other side. It can do speech in speech out, um, with all the capabilities of a large language model, which is really interesting. So you've got these supervisory ais that are just gonna invoke the appropriate models for what you are doing. Um, and which would make sense why you'd have all these specialised models around, um, not just for that. They're better at it, but they're cheaper

Michael Sharkey (00:39:34):
For that price. So the other one we wanted to call out was, um, this Lima Less is more for alignment paper. I

Chris Sharkey (00:39:41):
Would've said Lima,

Michael Sharkey (00:39:42):
Right? Lima? Like,

Chris Sharkey (00:39:43):
Lima, like Lima, Peru, I don't

Michael Sharkey (00:39:44):
Know, Lima, I don't know. I I'm obvious obviously the king of bad pronunciations on this show, but less is more for alignment. Let's not use the acronym. Yeah. So can you explain this one? It, it sort of goes against the grain of training these models on huge data sets and, and more to the point of fine tuning existing models to get better outputs.

Chris Sharkey (00:40:05):
Yeah, so in the last week, there's been three different models that have been released with this idea of, you know, less is more like how like a, a smaller quality training set can yield as good output as something as large as G P T four, for example. And so they're still using the, the 65 billion parameter llama release, like the Facebook leak dataset. So they, they still do the normal training, it's just the, they call it alignment. So getting it to behave like a, like a helpful agent is what they're trying to align it to. So they mean human preferences, how they would expect it to interact with you. And so they managed to train Lima on just a thousand curated high quality prompts, right? And they're getting 45% preference when compared to G B T four. So they'll give it to a human to chat with and they'll be like, here's the two responses, which one do you prefer?

(00:40:56):
And 45% of the time they're picking Lima over G B T four, which is obviously trained on they think, you know, a a trillion parameters or something crazy amount of stuff. I forget, maybe I'm exaggerating, but whatever it is, it's massive. This one's been trained on far less and it's giving almost as good results. And so their conclusions from this that a lot of the capabilities of the models are actually learn in the pre-training. So it's learning it to discover, meaning just from the text it's being trained on without being instructed on how to interact with you. So, you know, all these capabilities are latent within it and only a small amount of, of, uh, I guess preferential training is needed to get it to perform in the way you want on top of that. Um, and you know, one thought I had about it was funny is everyone's trying to to, to align them to be a helpful AI agent, but who's, who's aligning it with other goals.

(00:41:49):
Like, you know, it, it's sort of, everybody wants that chatbook bot experience, but there's so many other possibilities you could do based on that knowledge and that small amount of training. And sorry, I know I'm going on, but the other significant thing is they trained it in less than a day on commodity hardware. One of those a 100 s we talk about from Nvidia that's got caused their stock price to go up so much. So if you did have other ideas of how you wanted to train your own model that has different goals in terms of being a helpful AI agent, you can do it now at home. Admittedly, you gotta spend 20 or 30 grand, but you could do it yourself.

Michael Sharkey (00:42:25):
It's interesting too, the data that they selected to train it on, I think it could give some insight into what, uh, OpenAI trains on. So stack exchange, uh, STEM topics, stack exchange other, they don't really specify Wiki how, which I don't know about you, but have you ever been to that wiki, how it's like you've gotta click through all the steps to get their ad impressions up as you go through. It's like a terrible site. Oh, really? Uh, Reddit, so they, uh, trained it on that, uh, the subreddit ask Reddit, uh, and then, uh, sorry, that was the testing of it on, on Ask Reddit, but they, yeah, they trained it on Stack Exchange Wiki, how, like it's just not a tonne of data and some paper authors from Group A, whatever that was. So yeah, really very, uh, very small data. Yeah. And is that just modifying the weights of lama? Is that how that's working so that the functions behave differently in the neural net? Is that how it works?

Chris Sharkey (00:43:27):
Yeah, yeah. It's, it's, it's essentially a form of fine tuning of the model to calibrate it to, you know, this input gives this output. So it's like that multi-shot example thing where you're just giving it a thousand examples and going, this is when you've encountered situations like this, this is how I want you to behave, essentially. But it doesn't try to modify the knowledge it's been trained on to get its, its internal waitings essentially. Do you think

Michael Sharkey (00:43:49):
Though, this is an argument for, we should just be fine tuning the existing models like G B T four, which I'm sure they're probably doing over and over again to get the most value out of them instead of creating new models. And maybe that's a knock on the video. Like maybe we don't need to go and just constantly crunch a bunch of, um, you know, like, how do you think this contradicts the need for Well,

Chris Sharkey (00:44:13):
Yes, yes and no because on on one hand what they're showing is they can more efficiently fine tune them into a certain style of behaviour that they're targeting. But on the other hand, you've got this paper that I read during the week where I wrote the name down, but now I've, I've lost it. But it was basically saying that, um, you can get meaning, oh, there, there it is, evidence of meaning in language models trained. Um, and what it's sort of saying is that the, the me, the amount of meaning that the AI is getting out of the knowledge it's trained on is actually correlated with the quality of its output. So it, it, it's not actually answering directly based on what it was trained on, it's actually making its own evaluations of the information based on everything it's trained on when it answers a question.

(00:44:59):
So for example, if you train it on Wikipedia and then ask it a question, uh, you know about Harrison Ford or something, it isn't just gonna literally just quote the text that it was trained on in answer to a question. It'll make its own evaluation of that and then answer the question, for example. So I think that even though it's showing that that custom models can be done cheaper and yield pretty good quality results, I don't think it negates the need for larger models that have more, uh, capabilities. If you're looking for search for meaning and emergent behaviour, I don't think that research should or will stop.

Michael Sharkey (00:45:34):
So it could be the case where the large language models are utilising the smaller specialised models instead of plugins, like now they're actually utilising smaller, large language models to complete certain tasks or, or, yeah,

Chris Sharkey (00:45:48):
For specialised tasks that they're better at, for example. And also just cost, you know, because if I've got a smaller model, like they trained this one during the week called Tiny Stories, using only words from three and four year olds and showed not only 2.8 million parameters, right? So would the other ones are like hundreds of billions of parameters or 65 billion or whatever it is. Um, whereas this one was literally three, 2.8 million parameters. And it can write coherent English stories. So you're talking about models that could run on your phone or run on just an, probably an Apple watch or something, um, which is obviously cheaper. So I can see this, this sort of multiplicity of models where the AI is making the call as to which ones to use when, um, to be more efficient and more targeted in what it's trying to do. But that doesn't mean that that doesn't mean the bigger model doesn't want to be bigger and, and, you know, smarter.

Michael Sharkey (00:46:43):
Yeah. It, it'll be interesting to see if there's an opportunity for startups to go and create these specialised models. Like maybe that's a, that's a new category that forms for investors to look into, which is where people are just going and, and training the most specialised models on, on really, uh, good structured data from their own dataset.

Chris Sharkey (00:47:09):
Yeah, we've seen, we've seen this, and we haven't really spoken about it much because it's very early days, but there's a lot of efforts in the medical industry to do this on medicine because obviously, you know, ge giving the AI a deep understanding of medical research, you know, ideally unbiased medical research where it can make its own evaluations of the meaning of that data. I think we're going to see really, really significant advances in medicine. Probably ones people don't like, but you know, if you are giving it the raw research data and allowing and training it on how to form conclusions or even letting it form its own conclusions, it'll be very interesting to see once the AI starts writing its own papers, which we know it's capable of doing what we see from the medical research community. So I think that's a case where specialised models are so important. You don't want something that's trained on frigging Reddit, um, diagnosing your medical conditions, right? So I think that's where we want, um, the, the specialised, yeah,

Michael Sharkey (00:48:04):
That proprietary information, maybe the AI will suggests putting a neural link in everyone so that it can monitor our vitals and then it turns us all into robots and creates a robot army

Chris Sharkey (00:48:14):
Like this would've, like, what do you call it, like an agile iterative approach. All right. Those hundred died. We'll try a different

Michael Sharkey (00:48:20):
. Literally, maybe that's how we all die. We all get neurally implant cuz there's some early benefits then it

Chris Sharkey (00:48:26):
Just, with the BS in the AB test or something. Yeah,

Michael Sharkey (00:48:29):
Literally it's like, oops, I just killed humanity. I didn't mean that, uh, as an AI language model, I shouldn't have done that . Um, that's okay. Alright, well let's move on because I want to cover some more funnies. Um, so a lot of people have been having fun with this and I I think this announcement that we're about to cover is such a great example of cherry picking examples. So, uh, Adobe announced generative fill in Photoshop. It's a beta for their desktop app, and a lot of people have been getting access to this and, uh, and using it, I just wanna bring up the video on screen. I will mute it. Uh, and so the example up on the screen, I know the majority of our people listen to this show, so I'll do my best to explain it. It's a cyclist on a road and the road has no markings.

(00:49:20):
And so we're in Photoshop and as I play out this video now, they select a, an area of the road, so the middle and it says, what do you want to add here? And they say yellow road lines. And then the generative AI creates very realistic, authentic, uh, road lines. Then they select more canvas to the left of the image. Uh, so the cyclist is along a highway and then it just fills the rest of the, the pallet in and it looks so realistic. Um, and so then there's another example where they put a deer in like a dark wet alley alleyway to show you how they can just transform backgrounds into all these things. But, and this is where I had to laugh, so on Twitter, uh, Tom Quan, he has said the new generative Phil feature in Photoshop is a game changer and it shows him selecting a, a photo, which I presume is himself selecting his mouth and saying add sexy beard . And what it adds is, um, puts is that

Chris Sharkey (00:50:27):
Such a thing?

Michael Sharkey (00:50:29):
Well, yeah, but it puts the worst lipstick and makeup on him ever with a beard . It's so bad. Uh, if if you get a chance to check it out, you should look it up. I think

Chris Sharkey (00:50:39):
He's just, I mean that's him cherry picking a bad example, .

Michael Sharkey (00:50:42):
Oh, I know. But I just think that a lot of these AI examples today at this moment in time, a lot of them are very cherry picked when they do these announcements and then you go to use it in reality into something like that. And uh, yeah.

Chris Sharkey (00:50:54):
Yeah. And I think we've, we've talked about this before, right? Like there's always, we talk about these things, then you go try it and you realise that there's a little way to go on it, but it, but it's, it's still the future. It's still what we're going to get to, whether or not the current implementation is good enough in every scenario. And I think people would rather have it and keep using it until they get the results they want than just go, ah, it's not good enough yet, guys, we won't release it. Yeah. There's a, there's a related one that came out last week and we missed this. It's called Drag your GaN and it's, it's sort of like segment anywhere. Um, but with image manipulation built in, so like, the best example I can give you is they have a picture of a lion, right? And then you can segment it into the various like points and you can actually drag a a point and open its mouth and then see the modified image with the lion with its mouth open. It's completely fascinating. Like you can move people's legs and move their pose similar to dream poses that we covered before. Um, you know, you can get a car and change its orientation or you can move the sun up in the sky over a mountain, for example. Absolutely. Amazing. And

Michael Sharkey (00:52:00):
I really think the, the thing here is it's gonna take maybe a couple of years before we just get everyday access to a lot of these tools. Like it's just become a native part of our workflow. Like they might exist, they might be released, but it does take sort of this transformation as a worker in, in your day-to-day to think, okay, how am I gonna use these potentially transformative technologies in my day-to-day right now? That's like, there's a lot of good demos that's,

Chris Sharkey (00:52:26):
And as we know firsthand, it's very time consuming to try because, you know, you've gotta clone them, you've gotta have the hardware to actually be able to run it. You've gotta read all of the, the guide of setting it up and get data in a lot of cases to either, um, you know, train it a bit more or at least test with. So it's a very, very, um, time consuming thing to keep up with. And as we've discussed before, quite overwhelming if you're trying to stay on top of everything. But at the same time, that doesn't take away the, the immense excitement and value of all of these things, uh, coming out and then not even taking into account the combination of technologies like individually, they're amazing, but combined, they're, they're, they're, you know, the possibilities are huge.

Michael Sharkey (00:53:07):
I think this is the, the hard part, right, is I often think, well, wouldn't it be great to use this model with this thing to do this? But you're right. Like there is that barrier to entry right now where it's incredibly hard to connect these models and tools together. And also just knowing what to connect together. And I think these are the next innovations. We'll see where we're connecting these together and we're seeing the power of that connectivity, but also seeing it more seamlessly connect into our, our daily workflows. Uh, yeah.

Chris Sharkey (00:53:41):
And I think also the, the choke points of knowing where the AI is going to help you, because I thi I, I often reflect on when I'm experimenting with things and think, you know what? All that work I just did, the AI probably could have done, you know, but it's just my, in my nature to go ahead and try and string them together themselves. And I think we'll start to see platforms and other, you know, experimental models that will allow you to have this multi-model experience where you plug in the new model, you tell your supervisory AI its capabilities, um, and then start to talk about, okay, now show me what I can do with these capabilities and let it be the one to interface with the models rather than you having to get to know each one and its capabilities and limitations.

Michael Sharkey (00:54:23):
Some Microsoft at their conference, developer conference, uh, this week announced Windows Co-pilot, which is similar to all the other co-pilots they've got or announced in offers, except it's a button that you press and it brings up sort of like the Bing assistant on the right hand side of the OS here, and you can do things like, how can I adjust my system to get work done? I think their examples were pretty terrible, but it does things like, it turns on dark mode de

Chris Sharkey (00:54:52):
Defrag the hard

Michael Sharkey (00:54:52):
Drive . Yeah. And it creates like a focus session and stuff like that. But I think the interesting part was giving it access to APIs in the OS to actually modify files. Uh, there's an example which will destroy all those, like read your PDF with a chatbot, uh, AI startups, but you can drag and drop a PDF into the co-pilot and get it to, to summarise it. So it, it is bringing a lot of that AI into the OS workflow. And I think that sort of slightly scary thing about it, but also exciting is giving the AI access to do things on the desktop. So like, you know, turn it into dark modes, not that interesting, but also manipulate files and, and work across

Chris Sharkey (00:55:38):
Documents. Well, yeah, and I think, you know, at, at the corporate level, I don't know how companies are necessarily gonna, I'm assuming they've got control over turning it on and off, but, you know, it it like having your staff can send all your files to Microsoft, you know, is a bit concerning. Um, potentially, like we've talked about prompt leaks and other things in the past when, when the, um, what was the one called Sydney first came out, you know, and Amazon was banning their staff from using it for that exact reason. Like they might be training on our internal documents. I mean, that definitely has that, that issue there.

Michael Sharkey (00:56:11):
I also think the scary thing is still they haven't figured out prompt injections. So if you are browsing the web or have this thing open, even you download a file and the PDF just contains prompt injections to delete all your files. I mean, it's the best malware on the planet because the malware's already installed on every computer you're

Chris Sharkey (00:56:31):
Computer. That's a, that's a massive attack vector because it's, it's almost impossible to defend against other than I suppose, prompting the user, Hey, I'm about to delete all your files.

Michael Sharkey (00:56:41):
Yeah. And, uh, Sam Altman literally on his, uh, you know, world tour and, uh, said at one of the, uh, tours

Chris Sharkey (00:56:49):
World Tour, does he think he's Elvis or something?

Michael Sharkey (00:56:51):
Yeah, he really does. I, I, I gotta say like this whole world tour thing at, uh, anyway, but, but the , the point I'm trying to make is he even said they might need an, an entirely new model to stop prompt injections. Like it may be an unsolvable problem with the technologies they're currently using. So, uh, yeah, it'll be interesting to see how Microsoft handles this, because it just seems like it is such a huge security risk that, that they haven't may or maybe fully considered just rushing the technology out, but it is in preview right now, so maybe they'll they'll sort this out.

Chris Sharkey (00:57:27):
Yeah, I mean, I still admire them trying to get the technology into people's hands. Um, even when they haven't figured that out, it's just important that they communicate the risks to people, so they're not, they're not using it for things that, that could go really wrong.

Michael Sharkey (00:57:42):
So this week as well, and, and to wrap up today's show, OpenAI released Universe, and we were talking about the implications in gaming last week, and it's a software platform for measuring and training and AI's general intelligence across the world's supply of games, websites, and other applications. And so they, uh, you know, they give some examples in here. Um, back in April they launched Gym A Toolkit for developing and comparing reinforcement learning algorithms. Uh, and then they talk about this announcement in here as well, where you can basically get it to learn how to play different, uh, games. Now, this didn't, uh, this is not what I wanted to talk about, but, um, a listener of the show actually sent us their GitHub project, which I, I wanted to give a, a call out to just to show in gaming the early impacts, uh, that AI can have or generative AI can have. So it's a clone of the game. Do you remember the game? Where in the world is Come in San Diego? I certainly do, yeah, yeah,

Chris Sharkey (00:58:50):
Yeah, yeah, I do. Yeah. And see, I love that game.

Michael Sharkey (00:58:52):
Yeah, you get clues around the city and, uh, and so, uh,

Chris Sharkey (00:58:57):
It's, I still don't know where like Bonus Airs is, and like , a few of the others, ,

Michael Sharkey (00:59:03):
It never taught you good geography.

Chris Sharkey (00:59:05):
No, no, no. It had, it had no benefit in my life, but I did love playing it. Yeah.

Michael Sharkey (00:59:09):
But the, it's, it's really cool. So the, the whole, uh, the whole idea here is that the different scenes are being recreated with generative AI actually played it. There's a, there's an online, uh, version of it that's been released. I'll link to this in the show notes, but the, the imagery's really nice and it shows how you can create a game using generative ai. But what I thought would be even cooler is have generative AI come up with different mysteries as well, that, you know, we're like different narratives and, and things like that. And potentially, um, you know, showcase some of the characters, uh, like different, uh, characters that you could have in these games. But I think these are the early signs that we're seeing in, in gaming that, uh, can be brought to life thanks to generative ai and obviously will keep following projects like this. Um, and

Chris Sharkey (01:00:00):
Yeah, it's nice, it's nice to see like that connection between the research and the potential and the real world. Like, I always like to see a real implementation and, you know, if you are a listener and have got a project you'd like to share, I'd love to see more of 'em. That would be really exciting to see what our listeners are actually creating in that space. Yeah,

Michael Sharkey (01:00:17):
You can reach us on Twitter. I think we have like four followers on Twitter. We haven't really said that we're on there. I don't

Chris Sharkey (01:00:23):
Think, I don't think I even follow

Michael Sharkey (01:00:24):
. Yeah, it's at this day in ai, if you want to tweet us, um, with something we should look at. Um, but you can also, if you watch on YouTube, just leave it in the comments. We do read those comments, although apparently it's been blocking people's links or something like that. Uh, so that'll, you'll find a way. Yeah, that'll do it for this week. If you do like the show and, uh, and you want to do it, please leave a review. It helps obviously spread the world. We got some really nice reviews in the week. Thank you, uh, for that. We really appreciate it. And if you're watching on YouTube, remember to leave a comment. Tell us what you think. Would you get the Neurolink implant? Yeah,

Chris Sharkey (01:00:59):
I, I must say, I would very, very interested in people's comments on the Neurolink thing because I don't know how I feel about it. I'd love to get my opinions from someone else.

Michael Sharkey (01:01:07):
. Yeah. I wanna know, is it like the iPhone? Like, would you adopt it really early, like be one of the first, or you're like, hell no, I'm not lining up for this.

Chris Sharkey (01:01:15):
They, they'll line up for it. Uh, I think people will be clamouring to get it.

Michael Sharkey (01:01:19):
Yeah. All right. Well, there was a lot of news to cover this week. We tried to do our best to get through it. There is a lot happening in ai. We'll cover more next week and, uh, we'll see you then.