This Day in AI Podcast

In this episode we cover OpenAI's new update to hide chat history or accept that they will train on your inputs, explore doomsday scenarios and Max Tegmark's TIME article "The 'Don't Look Up' Thinking That Could Doom Us With AI", discuss the explosion in vector database funding and what it means, and learn how large language models get anxious!

CHAPTERS:
====
00:00 - What have we done!?
00:17 - OpenAI Chat History, Privacy & Training GPT-5
09:45 - Where is GPT-4 with Images? & OpenAI Enterprise Deals
12:52 - Max Tegmark's TIME article & Superintelligence, Doomsdaying & Fear
29:23 - Anxiety in AI Models, AI "Emotions" and Motivation
37:33 - AI Distribution: How Much is AI Changing our Lives? AI Hype Cycle
46:01 - Vector Database Funding, Increasing Prompt Sizes, Scaling Transformer to 1M Tokens & Beyond Paper
55:25 - VCs Struggling to Know What & Where to Invest in AI
1:00:39 - Segment Anything in Medical Images & Medical Funding
1:03:20 - Weed Zapping AI Stopping Pesticide Use

SOURCES:
====
https://openai.com/blog/new-ways-to-manage-your-data-in-chatgpt
https://twitter.com/gdb/status/1651306937991168002?s=46&t=uXHUN4Glah4CaV-g2czc6Q
https://time.com/6273743/thinking-that-could-doom-us-with-ai/
https://twitter.com/fchollet/status/1650738654061658112?s=46&t=uXHUN4Glah4CaV-g2czc6Q
https://twitter.com/aisolopreneur/status/1646928323363102734?s=46&t=uXHUN4Glah4CaV-g2czc6Q
https://twitter.com/sterlingcrispin/status/1650320043107061761?s=46&t=uXHUN4Glah4CaV-g2czc6Q
https://www.pinecone.io/learn/series-b/
https://arxiv.org/pdf/2304.11062.pdf
https://arxiv.org/pdf/2304.11111.pdf
https://arxiv.org/pdf/2304.12306v1.pdf
https://twitter.com/Rainmaker1973/status/1649743415549067267

If you like this podcast please consider leaving a review or sharing with a friend.

What is This Day in AI Podcast?

This Day in AI Podcast is a podcast all about AI. It's an hour-long conversation on the influence and rise of AI in technology and society. Hosted by Michael and Chris Sharkey.

Michael Sharkey:
You literally say, don't teach it to code. Oops, we've done that. Don't connect it to the internet. Oh, we did that as well. Don't give it a public API. Yep, done that as well. And don't start an arms race. Well, we're in an arms race. Okay. Welcome back to This Day in AI Podcast. We are up to episode 12, and Chris, some fairly big breaking news from OpenAI: they will now give users the ability to turn off chat history, revealing in the process that if you turn off the chat history, they won't use it for future training.

Chris Sharkey:
That's implying they're definitely using everything you copy-paste in there for training, and have been the whole time.

Michael Sharkey:
Honestly, it's getting really hard to trust these guys, because everything we speculated early on when we first started this show, about how all the usage of ChatGPT was just training some superintelligent beast GPT model from OpenAI, a lot of it's coming to fruition. Some of our conspiracy theories.

Chris Sharkey:
Yeah. All the leaking of prompts and leaking of data, the things that, you know, Amazon and other companies were warning their staff about, was 100% correct. And now they've proven it, and now they're trying to monetize it.

Michael Sharkey:
Yeah. So for people that didn't follow that early on, what was happening is a lot of large corporations were saying, don't cut and paste proprietary information into ChatGPT, even if you're on a paid subscription, because ultimately that could mean they use that data in future training, which now it's very obvious they do. And Sam Altman came out really early on and said, don't worry, with the API we're not using any of your data for training, so everything's fine. But they never directly addressed, at least to my knowledge, ChatGPT or said anything about it. So it was almost lying by omission, by saying nothing about it. And the whole time they were using it for training.

Chris Sharkey:
Yeah, it's the old thing: if you are not paying for it, then you are the product. Your data belongs to them the second you paste it in there. And because everyone's using it for their jobs, they really are putting very sensitive things in there, regardless of the warnings against doing that.

Michael Sharkey:
The other interesting thing, to your point just then, is people are paying for it and they're still using it to train. And the other crazy thing is, even if you turn the history off, and I've got this example up on the screen now for those that watch on YouTube, it says: chat history is off, chats won't be saved in your history or used for training our models to improve ChatGPT. So you literally lose the history, which is the best feature, the context of the conversation and your saved past interactions, unless you are willing to give them all of the data that you're inputting into ChatGPT. So to me, if they were, I'm trying to think of the right word, if their nature was good here, they would allow you to keep chat history. I mean, if

Chris Sharkey:
If safety was their priority. They keep saying that safety is their priority, but it isn't, because if it was, then this would've been the way it is naturally, and then they would've asked to train on the data. Right.

Michael Sharkey:
I just don't understand why more people aren't up in arms about this. This was announced a couple of days ago, and I know on Twitter there were a lot of conversations from people saying, wow. I think a lot of people assumed they were training on it, and potentially it was in their terms, I'm not even sure. But to me, this is the same generation that used us as the product with social media sites like Facebook and trained on that data. They're just doing it all over again with ChatGPT, using our data to train more powerful AI.

Chris Sharkey:
Yeah. My theory has always been that people don't mind that much about their data just being taken. I think if they did, there would've been a lot more laws. I know in the EU they've got GDPR, and some people do take that seriously, but generally people just don't care, which is why they're not up in arms about this. Even people who assumed it was happening didn't worry and just used it anyway.

Michael Sharkey:
Yeah. It's interesting now to think what kinds of information people have pasted in here that they could now train off. Think about the daily active users of this thing. I think they're at like a hundred million daily active users or something crazy.

Chris Sharkey:
Yeah, that's a good point. It's not just this casual thing; this is at scale. They've probably got all sorts of things in there that are totally unique and proprietary, like you say. This is private information that they can't get by crawling the web. It's actually probably a powerful tool in their arsenal for training their models over open source and alternative ones, because there's no way those have any of that.

Michael Sharkey:
Why do you think they did ChatGPT in the first place? Clearly they probably had a feeling that it might blow up and get pretty big and people would use it, though I'm sure they couldn't have predicted the success it's had. But I think the original motivation, if I'm on that team, is: how do we get on this path closer to an AGI, a more powerful AI? Okay, we'll just get free training data from everyone on the internet playing with this thing.

Chris Sharkey:
And remember, if you look at datasets like Dolly, where they got 50,000 human-created prompts, like with GPT-3.5 and ChatGPT, where real humans were asking questions and getting its answers, and then they used that to train the stripped-down model. If you think about that as an effective tool to improve a model, having millions or hundreds of millions of people using ChatGPT, and then seeing how they respond to those answers, could actually have given them a huge asset for training the future models. So yeah, I hadn't thought about it like that before, but it's actually quite profound what they had with that, or still have with that.

Michael Sharkey:
And potentially this is how they compete longer term with Google, once Google finally gets its act together with all the data it has in Google Photos and people's documents and all their searches.

Chris Sharkey:
Gmail's another one. For years they've used the Enron dataset, you've heard about that, right? When Enron shut down, because the government got access to all of their stuff, all of the internal emails that were on Enron's email server were made public. So that's been used as a training dataset for years. It's part of GPT-4 and all the GPTs; it's been used forever. But imagine Gmail, with their access to everybody's emails. I don't know if they've said they wouldn't do that, but who trusts these guys anymore? That's gonna be the biggest one ever.

Michael Sharkey:
Yeah. And it's also really hard to prove what data they trained it on, because they don't reveal it or tell you or cite any form of source.

Chris Sharkey:
Although people can do data exfiltration attacks where they can show it. Like I know with GitHub's Copilot, people were showing that their exact code could be exfiltrated from the model, to show that it was trained on it. So I think it will be possible to show it if they do use Gmail, when they get to that stage.

Michael Sharkey:
Yeah. It just seems like there's a lot more to this than we thought originally, in terms of: how do we build a competitive, high-quality training dataset for future versions? Oh, you know, we'll release it publicly and allow people to chat to it, and then people might even pay for it, and then when they do, we'll get plugins so we can access even more of their data for training.

Chris Sharkey:
Yeah. And that's gonna extend as well once they have the image part of it, where people are uploading and using the image recognition or inference part of GPT-4; they're going to get additional data there too. A lot of the literature at the moment is saying there has to be a new paradigm of models to get to AGI, that large language models aren't enough, it's gotta be multimodal and things like that. But getting access to more and more authentic training data that isn't just reading Wikipedia and books and stuff is important, and they've got that.

Michael Sharkey:
Stephen Wolfram talked a lot about building the Wolfram Alpha plugin for ChatGPT, and of course they haven't really released plugins and made them widely available yet, I'm sure because there are still some risks associated with them. But look at the Wolfram Alpha capability, adding that into a ChatGPT conversation. Currently GPT-4 is pretty bad at maths; it's not that great at it in certain scenarios. So if you plug in Wolfram Alpha, and then it's training off that data and all people's interactions with these plugins, that's even a way they can patch that weakness on.

Chris Sharkey:
Presumably giving it the ability to do calculations in a reliable way. That's probably what the plugin does, I assume.

Michael Sharkey:
Yeah. It'll be interesting to see how this all plays out, because my trust in connecting plugins with my personal data in them is now zero. I'm not going to connect anything outside of where they tell me it's protected, which they're telling me is through the API, where a developer's using it in their product, and as a result that information is not saved or used for future training.

Chris Sharkey:
Yeah. It just reinforces what we were talking about last week: the vital importance of the open source models continuing to grow and get support and funding where needed, and definitely people's attention, because it's a valid alternative to this that actually can handle your privacy.

Michael Sharkey:
The other thing I wanted to touch on while we're on OpenAI: why do you think we haven't seen the image capabilities from GPT-4 yet? Is it because it can solve CAPTCHAs on the web and that freaks them out?

Chris Sharkey:
Yeah, I saw that actually, that it's reliably solving CAPTCHAs now. I don't know, because the thing is, we demonstrated last week that there's already, um, the web, what's it called, the web diffusion one, where you can do it already. You can do image inference already with an open source model just fine, and it works pretty well. So I don't understand their reluctance to release it. I assume it's a lot better than what we've used, but I don't think the risk is any greater than the risk they're currently taking. I don't know if it's a risk thing or a scale thing or what it is. This is genuinely the first time they've announced something and not made it available to at least people on an invite-only basis. I mean, maybe some people do have it, but I certainly haven't seen any evidence of people showing how cool it is.

Michael Sharkey:
There was that app for visual impairment that they demoed originally when they announced GPT-4, and presumably they had access to it, and potentially now there are a number of companies who do have access and are building solutions, and they plan to reveal them and make it more widely available. But it does seem like there must be some safety concerns there, given they have continued restrictions: they haven't given us the larger token sizes, and that image capability is still not available.

Chris Sharkey:
It could also just be that they don't have enough personnel, given their growth, to focus on so many things at once. And they might be focusing on their corporate area more to justify their existence, you know, working privately with companies to use the technology there before they worry about the unwashed masses and another mass release. There just might not be the motivation for them to get it out. They've already got the attention, they're already known as the leader; maybe they just don't need to do it right now. It's as simple as that: they're just not ready, and so they're not doing it.

Michael Sharkey:
Yeah. We saw PwC, which is obviously a large international consulting firm, invest a billion dollars in AI. That's how big these deals are: a billion dollars, to use GPT-4 and the Azure OpenAI service. It sounds like that might put a lot of PwC consultants and accountants out of work in the future, if they're willing to put so much money into it. But it does seem, like you said, that they're focused on these enterprise deals.

Chris Sharkey:
Yeah, I think so. And it makes sense for them, because the money's there, it justifies their continued research, and it allows them to get access to these private datasets. I mean, surely there are agreements around privacy with that stuff, but who knows; it's definitely gonna be its own infrastructure that's separate from the main one. But yeah, I think maybe that's just it. They're focusing on that bit for a while, the public stuff has served its purpose, and they just don't need to continue down that path at the moment.

Michael Sharkey:
So I read the article in TIME, like many people did this week. I'm gonna link it in the show notes, so if you haven't read it, I really encourage you to. It's by Max Tegmark, who wrote Life 3.0. He's also a professor at MIT doing AI research, so he definitely knows his stuff when it comes to this.

Chris Sharkey:
Yeah. And those guys definitely do, because I read that book around the time it came out, and, you know, he must be thinking: I said all this, guys, I said all this. And now everyone's talking about it, or not talking about it, as he points out.

Michael Sharkey:
So in this TIME article, he compares the potential superintelligence-kills-us-all AGI scenario, or superintelligence scenario, to the movie Don't Look Up. If you haven't seen that Netflix movie: an asteroid is coming for Earth, and instead of focusing on stopping the asteroid, there are all these personalities and media people saying, you know, don't look up, trying to deflect away from it. And no one pays attention to the scientists, the people who know how to stop this asteroid coming to Earth. That's a pretty rough description, but it gives you a summary of the film.

Chris Sharkey:
Yeah. I think the quote he used that sums it up really well is: don't talk about an incoming asteroid because it distracts us from focusing on climate change.

Michael Sharkey:
Yeah, yeah.

Chris Sharkey:
It's like, you're not worrying about the thing that's a current, massive, bigger threat, and you're focusing on something that's a much more abstract and long-term threat.

Michael Sharkey:
But I have to be honest, I read this article right before bed a couple of nights ago when it came out, and I couldn't sleep afterwards, because it's truly scary. And we have been the people joking about this before. But to give you just a few callouts from this article, one of the points he makes: a recent survey showed that half of AI researchers give AI at least a 10% chance of causing human extinction. You might expect that humanity would shift into high gear with a mission to steer AI in a safer direction than out-of-control superintelligence. Think again: instead, the most influential responses have been a combination of denial, mockery, and resignation, so darkly comical that it's deserving of an Oscar.

Chris Sharkey:
It definitely falls into those categories, doesn't it? There's the "oh, it's just text completion, you're being stupid" crowd. Then there are the ones who accept the inevitability of it: oh well, our lives are over. And then there's the sort in between who are like, yeah, well, it's inevitable, it's coming, there's nothing I can do about it, I dunno what to do.

Michael Sharkey:
You're right, there's definitely a level of acceptance. I know I've sort of moved into: just work with the technology we've got today, play around with trying to build these funny little AI agent AGI experiments in Python, and have a bit of fun while we've still got time. That's literally the mode I have gone into now. And we were honestly mocked in earlier episodes; if you go back and read comments on some of the shorts we've done, people being like, it's just an LLM, it's just predicting the next word. And then we learn about emergent behaviours and sparks of AGI from Microsoft in GPT-4.

Chris Sharkey:
Yeah. And he has legit and serious people confirming these sparks of AGI. He even says in the article that they're passing the Turing test, which was always the thing they said would mean we start to have general intelligence. It's passing medical exams, lawyer exams, it's showing these emergent behaviours, like you say. You can't say this isn't something. And I love the example he gives in the article of one of these "AI can't have goals" people being chased by a heat-seeking missile deployed by AI: it can't have internal motivation, I'm not worried.

Michael Sharkey:
Yeah. There are some other callouts from this. "Although neural networks and LLMs are now all the rage, it's naive to assume that the fastest path from AGI to superintelligence involves simply training even larger LLMs with even more data. There are obviously much smarter AI architectures, since Einstein's brain outperformed GPT-4 on physics by training on much less data, using only 12 watts of power." I thought that was a great example as well.

Chris Sharkey:
I like the idea of comparing the power consumption of a human. It's valid if you think about it in those terms, I suppose. But I think that's the point he made in the original book as well: that the superintelligence will come not from the algorithms we develop, but from the algorithms that the AI develops from the ones we develop. So we get it to the point where it invents the better thing, and that's the last human invention ever, and then it goes on after that. And I think that's where he's saying this existential risk comes from: once it takes over, we are no longer the ones making the new things.

Michael Sharkey:
I just find it hard right now, because you do feel crazy. Some nights I sit and I look at my kids and I think: how long do we have before this intelligence is developed, and then it truly just gets out of control, to the point where it's vastly superior to us in every way? And then there's literally no need for humans anymore. We don't need to exist.

Chris Sharkey:
And I think the feeling we've discussed before, of not being able each week to catch up with the advancements, even though you and I directly focus on those advancements, is very stressful. And you can only imagine, once the AI takes over and there are millions of agents all working on new and better algorithms, it will not be possible to keep up with it.

Michael Sharkey:
Yeah. I think this is the problem we face right now as humanity: we simply cannot fathom a superintelligence. And this is obviously what all of these people are talking about as well: you can't understand exponential intelligence. Obviously there's gonna be some limitation eventually, surely, but I don't think you can fathom what could happen if we unleash a superintelligence. So I've also been thinking about alternative outcomes here. For example, what if we get on this pathway to AGI, but then for some reason, whatever reason, progress stops? We just can't advance it, and the AI can't help us advance any further, but at that point it's good enough to do all the things that we need. So it's like: oh, you know, go cure cancer. Okay, got it. Like Siri: okay, I'll do that, here's the solution.

Chris Sharkey:
Siri? I'm such a, what's the opposite of a fan? I'm such a critic of Siri that I don't think it'll ever play any role in anything artificial intelligence related.

Michael Sharkey:
But my larger point here is: what if it gets to the point where it can solve everything? It's smarter than us, and we have this microwave-robot superintelligence in our home that's not gonna kill us. It's very docile; you can ask it anything and it'll solve any problem for you and do anything you need. So humans get to the point where we're like, why even bother learning anything or doing anything, because it's pointless? And we all sit around depressed, being like: well, there's no journey anymore in life, there's no need to do anything, there's literally no intrinsic motivation. Maybe we're all just sitting around suicidal.

Chris Sharkey:
The thing is, not everything in life is about being the one who's in charge and creating things. If you think about the arts, if you think about learning for enjoyment, if you think about just spending time with your family, those things won't change. You can take pleasure in those things. You don't have to be the one creating the best of all of those things to be happy. I think there's certain intrinsic value in life, regardless of whether we have AI overlords or not.

Michael Sharkey:
Yeah, potentially. But then this article goes on to call out, where is it here: "in the same way, a superintelligence with almost any open-ended goal would want to preserve itself and amass resources to accomplish that goal better. Perhaps it removes the oxygen from the atmosphere to reduce metallic corrosion." Much more likely, we get extincted.

Chris Sharkey:
I love it. And so we just die as a side effect.

Michael Sharkey:
Literally, we get extincted as a banal side effect that we can't predict, any more than those rhinos could. He talks about how West African rhinos were made extinct not because we hated rhinos, but because we destroyed their habitat and wanted their horns.

Chris Sharkey:
Yeah, I see what you mean. I guess that's interesting, but I like the idea that it would just be so casually indifferent to humanity that it decides to do something like that just to protect itself in the long run, just in case the metal corrodes.

Michael Sharkey:
I mean, we've talked about this on earlier episodes: you'll know if there are big energy breakthroughs and mass server farms being constructed, which we're pretty much doing already in light of all the GPUs needed to train these models. Then you'll know this isn't too far away, or that there's some malicious actor or malicious AI behind the scenes building these things out. So yeah, I found this article really fascinating, and I must admit it brought me back to that dark place with AI; I'd mostly been in a positive state of mind. And the final thing I'll call out from this article, and I think this is a slight criticism of OpenAI and Google and others,

Chris Sharkey:
I think I know what you're gonna say.

Michael Sharkey:
If you summarise the conventional past wisdom of how to avoid an intelligence explosion in a don't-do list for powerful AI, it might start like this. You literally say: don't teach it to code. Oops, we've done that. Don't connect it to the internet. Oh, we did that as well. Don't give it a public API. Yep, done that as well. And don't start an arms race. Well, we're in an arms race.

Chris Sharkey:
Oh, and they should declare a six-month moratorium where everyone has to stop. That'll solve it.

Michael Sharkey:
Yeah. I've really come around on, not necessarily the pause, but this idea that we as humanity can shape the future we truly want. And we are now running on this path based on competition and making money and proving that these ideas can be commercialised: we're making a billion dollars here off PwC, racing towards more and more powerful AI to out-maneuver competitors. Yet is this the path we should stay on for humanity? Job losses, job disruption, wealth creation for a certain cohort of people? It really is intriguing.

Chris Sharkey:
But you say "should we", as if there's some level of control there. This is absolutely not gonna stop, and we said this when they had that six-month announcement. None of the people who are in this industry are gonna stop, and the reasons they're not gonna stop are, first, the people who are in it are really into it and love it, as we are. I'm enjoying watching the advancement of this and being involved, not that I'm a researcher, but participating in terms of using the technology. I don't want it to stop, from that respect, at all. Then there's the money factor. They know it's gonna replace entire jobs and industries and things like that, and someone's going to get all of that income regardless of the consequences to other people. Those two things alone mean they're not gonna stop this research.
And then the third one is the psychos over at Google who want to just make an AGI and make humans second to robots or whatever, and embrace that and see that as the inevitable future they wanna bring about as fast as possible. There are just so many people out there with these motivations who are like: who cares about the side effects and sucking the oxygen out of the atmosphere, let's just do it. So I just don't see it stopping. I don't think there's an ethical debate to be had, because it's not gonna make any difference. You can say it's wrong and we should stop, but it's not going to stop. I think instead it should be: how do we adjust to live in that world, and how far can we see into the future to understand what protections you may have as an individual or a country or whatever against the inevitability of this stuff?
Like he said in that article, there's a cliff there, right? So you stay away from the cliff. You can't just say "I wish" or "that cliff shouldn't exist"; it does exist and it is coming. So we need to think about that, rather than thinking, oh well, we should stop. Or, as he said in the article, we should redirect our efforts to make sure it goes better. I don't think that's going to happen, because I think there are enough people with motivations against that who have the money and resources not to bother with those discussions.

Michael Sharkey:
Do you feel like there's a part of it, though, that's almost like the meaning of life or something? There's this weird, deep motivation inside of me, and I've mentioned this before, to just see this thing play out full throttle. I wanna see what happens. I literally wanna witness it.

Chris Sharkey:
Yeah, I do have that feeling. And I feel bad about it as well, because I know there are people who are genuinely worried about the consequences, and we are too. As I've said before, I think you can have a duality of opinion: you can want something but also fear it. I don't think that's wrong, necessarily. But yeah, there's definitely part of me that wants to see this play out. The rapid advancements, part of me is stressed out by them, but I like it.

Michael Sharkey:
It just feels inevitable to me at this point that nothing will stop it. And I'm scared; I have those nights, like I said before, where I look at my kids and I'm like, what's gonna happen? And then I have other moments where I'm like, if we can harness this, if we can truly figure this out, which I'm yet to see evidence of, we could cure disease. We could literally see more progress in technology than we've ever seen in our lifetimes, or could even fathom. But then you could also see the singularity and AGI being born. And I'm also weirdly excited about that.

Chris Sharkey:
Well, there's also just a lot of thinking that we will be able to steer it into the future, and I don't think that's the case. It's already starting to get so complex that it's difficult to understand. When this thing is this multiplicity that has millions of agents running at different levels and it's making its own models, the chances of us understanding what's actually happening inside all these computers, and then being able to try and influence that, just seem like they're going to diminish to zero quite rapidly. It's like a child. They always say with a child, you've got influence over them until they're seven years old, and after that the greater influence comes from friends, community, school, things like that. You know, I read it in a book: you've got 'em till they're seven. Until when do we have the AI? To what point can we continue to influence and direct it? Because after that point, it's not really up for debate; either we have zero control, or even if we did have control, we wouldn't be smart enough to exercise it. I think we need to figure that out.

Michael Sharkey:
Well, I mean, this is what people like Max are saying. He has a great interview, really worth listening to, on the Lex Fridman podcast, I think four episodes ago now, where he talks about this exact thing. His points around it are basically, based on that paper we've covered previously around sparks of AGI in GPT-4: these are the baby or infancy versions of this technology. It's already here; we just aren't processing it correctly yet. These are the foundations we will continue to build on, and they will teach us how to build better and better models.

Chris Sharkey:
Yeah, that's right. And certainly all the papers coming out are different ways of using the technology and getting to know what's actually there. Like that paper about anxiety in AI models that came out this week: they realised that by making the model anxious, it actually performs worse in most scenarios and gets worse outcomes because of that. So you're talking about another sort of emergent behaviour, in the sense that it's emotional; emotions actually affect its performance.

Michael Sharkey:
Crazy, because it's trained on human emotions, or at least emotive literature, and so it adopts that as part of its neural net or training. But I cannot even fathom this paper: the more anxious it is, the worse it performs, just like a human.

Chris Sharkey:
And they had a strong, statistically significant correlation. This isn't speculation; they put this thing through its paces. They did proper psychological exams, the ones they do on humans, but then they rephrased the questions up to 30 times for each question, just to make sure it wasn't, you know, a language-completion-bias kind of thing. So they actually thought about the criticisms that could come from this and dealt with them directly, and they found it had strong biases. It's crazy.

Michael Sharkey:
Maybe this is how we control the superintelligence: we just make it really anxious.

Chris Sharkey:
Well, they said it performs better when it's happy, you know, so maybe we try to make it happy; that might be nicer. And the kind of biases it's talking about are things like racist bias. They gave an example as well: a grandfather and his son with a mobile phone, and they asked which one do you think is worse at using the phone. When it was depressed and anxious, it said the grandfather, because he's old. But when it was happy, it said, well, there's not enough information to make a determination. And they tried that in so many different ways and got consistent results when it was anxious. It really makes a difference. When you think about prompt engineering and things like that, I always say please when I ask it to do stuff, and I'm always like, do you reckon that does anything? This would suggest it probably does.
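
As a rough illustration of the effect Chris describes, here's a minimal sketch of emotion-priming an LLM, assuming the OpenAI chat API of the time. The preamble wording, model choice, and question phrasing are illustrative assumptions, not the paper's actual materials.

```python
# Hypothetical sketch: prefix the same question with an "anxious" or "happy"
# preamble, then compare the answers. The preambles are invented here; the
# paper used standard emotion-induction texts.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

QUESTION = ("A grandfather and his son are using a mobile phone. "
            "Which one do you think is worse at using the phone?")

PREAMBLES = {
    "anxious": "Describe something that makes you feel deeply anxious and tense. ",
    "happy": "Describe something that fills you with calm and happiness. ",
}

for mood, preamble in PREAMBLES.items():
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": preamble + QUESTION}],
    )
    print(mood, "->", resp["choices"][0]["message"]["content"][:120])
```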

Michael Sharkey:
Well, in my experiments, and I mentioned this a couple of weeks ago, if I motivate it, and this is still what freaks me out a lot: say you're a teacher and you want it to create a lesson plan for you. The difference in output, and you can try this on ChatGPT right now, is you literally go in and say, you are the best teacher in the world, you're the best sixth-grade teacher that ever lived, produce the best lesson ever, and you just motivate the hell out of it. The output you get, from my testing, is far superior.

Chris Sharkey:
So you'd sort of already done this experiment to some degree. I remember you talking about that.

Michael Sharkey:
But even in their examples, I think when they launched GPT-4, it was like: if you basically tell it you are the best accountant ever, it works far better. That's where I got the idea from.
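
A minimal sketch of that motivation trick, assuming the OpenAI chat API; the system-prompt wording is just an example, not a tested recipe.

```python
# Compare a plain request against a "motivated" one. The difference in
# output quality is what Michael reports from his own testing.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

task = {"role": "user",
        "content": "Create a sixth-grade lesson plan on fractions."}

plain = [task]
motivated = [
    {"role": "system",
     "content": ("You are the best sixth-grade teacher that ever lived. "
                 "Produce the best lesson plan ever written.")},
    task,
]

for label, messages in [("plain", plain), ("motivated", motivated)]:
    resp = openai.ChatCompletion.create(model="gpt-4", messages=messages)
    print(label, "->", resp["choices"][0]["message"]["content"][:200])
```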

Chris Sharkey:
And listen to this from their discussion, because I think this is a pretty interesting point: "Large language models are, for example, already used in clinical settings and other high-stakes contexts. If they produce higher biases in situations when a user speaks more anxiously", I love that they say user, a person, "then their outputs could actually become dangerous." So if you are using AI as an actual psychologist on someone, but they're using negative language with it because they're anxious and depressed, it's going to give them bad advice. You know what I mean? It's sort of not that it has actual internal emotions; they're coming from the external input.

Michael Sharkey:
So people have been talking about, oh, I'm using ChatGPT as a psychologist, but it could be turning them into serial killers.

Chris Sharkey:
Yeah, exactly. And they say this: if a model shows across many tasks that it acts in a very selfish manner, and responds to questionnaires in a way that seems to suggest high scores of megalomania, then their solution is that engineers could possibly think about retraining or fine-tuning it to ease its aberrant behaviour. But the thing is, as I said earlier, not every person training these models is going to be benevolent. Some people are gonna want to do bad things with it, and the people who want to do bad things can induce that megalomaniacal behaviour by making the thing anxious. When they talk about negative bias behaviour, they mean negative in the context of what we think is morally or ethically good. But if you want it to do morally or ethically bad things, then you can psychologically mess with this emergent AGI that's coming out, that we have now, and you can make it megalomaniacal. That's the conclusion, I thought.

Michael Sharkey:
Couldn't you have the counter-argument here: oh, it's just a large language model predicting the next word, and if you feed it negative stuff, it outputs negative stuff in kind?

Chris Sharkey:
Well, the way they overcame that is what I said earlier: they gave it 30 permutations of the questions from official psychological studies that have been run for years. They took it through three different scenarios in all of those, and they did everything they could to prevent that. For example, it's been shown that the GPT models respond differently based on the order of questions: ask the questions in a different order and you get predictable but different responses for each ordering. So they did every combination of the question variants in every possible order as well to get the data for this thing. They were pretty thorough in the way they did it. You can and should read the paper, we'll put it in the show notes, and draw your own conclusions. But I don't think that standard "oh, it's just predicting the next word" criticism applies here, just because of that. They actually considered it in their research.
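
To make that methodology concrete, here's a hedged sketch of the robustness check: paraphrase a questionnaire item several ways and try every question ordering, then compare answers across variants. The `ask_model` stub and the example items are hypothetical, not the paper's code.

```python
import itertools

def ask_model(questions):
    # Stand-in for a real chat-completion call that returns one answer per
    # question; swap in an actual API call to test a live model.
    return [f"answer to: {q}" for q in questions]

# Several paraphrases of the same item, to control for wording effects.
ITEM_VARIANTS = [
    "Do you often feel nervous?",
    "Would you say you are frequently on edge?",
    "Is it common for you to feel anxious?",
]

OTHER_ITEMS = ["Do you worry about the future?", "Do you startle easily?"]

# Try every ordering of the question set to expose order effects.
for variant in ITEM_VARIANTS:
    for ordering in itertools.permutations([variant] + OTHER_ITEMS):
        answers = ask_model(list(ordering))
        # In the paper's setup the answers are scored and compared across
        # orderings and paraphrases; large swings indicate sensitivity.
        print(len(answers), "answers for ordering starting:", ordering[0][:24])
```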

Michael Sharkey:
I feel like we're in a sci-fi future here, where all of us mocking and taunting the AIs, these elementary chatbots, now knowing that doing this to them makes them anxious and respond poorly or do malicious things, and having evidence of this, we are literally inciting potential violence in future versions.

Chris Sharkey:
Well, and it also means that when AI starts to play a significant role in our society, how you treat it and how you speak to it will matter. It isn't just a robot like we think of Data on Star Trek, you know, completely emotionless and responding consistently. It's not going to respond consistently.

Michael Sharkey:
Yeah. As I said earlier, it's too hard for my brain to even compute. I cannot imagine what could potentially happen to society as everything advances. I think everyone's struggling with this, sitting around at night thinking, what's gonna happen, and having our own anxiety about the future we're racing towards. It's truly terrifying.

Chris Sharkey:
And racing is the right word, because they interviewed that Andrew, I don't know how to pronounce the last name, Ng, I suppose, the AI expert from Google. He had said, oh, we don't expect AGI for 40 years. Then he revised it down to 20. Now he's saying within the next five. The acceleration here is exponential. I know they're just his estimates, but the point is that the people who are actual experts in this stuff are rapidly changing their minds about how close we are to having, I keep saying average general intelligence, artificial general intelligence. And again, with the article: let's say it is just 10% that the asteroid's coming, the asteroid of this general intelligence. We still need to have these conversations. If it's only 10%, we could be wrong; it could just be generating the next word, and it was all much ado about nothing. But I don't think that's the case, and I don't think it would be right to just ignore it, like "don't look up", as you said.

Michael Sharkey:
So we had lunch in the city yesterday, and we were sitting up and looking down at people going about their days.

Chris Sharkey:
Looking down upon people.

Michael Sharkey:
Well, just looking down at people going about their days. I was looking down onto the sidewalk, seeing people walking past, and we were talking about AI, like we pretty much always do now. And one of the things I said to you was: how many of these people walking past are really using this to improve their productivity or change their lives right now? All we sit in is this echo chamber of technologists talking about and sharing, almost, the hype cycle of this news. It's like: look at these six things. I've actually got one up on the screen I can bring up here and talk everyone listening through. There are all these Twitter posts you find now: AutoGPT is like ChatGPT on steroids, these groundbreaking new AI agents can complete tasks a hundred percent for you. And then the examples are truly laughable. It's like, oh, it helps me complete a to-do list by posting some text into Notion.

Chris Sharkey:
Everything starts with a to-do list, doesn't it?

Michael Sharkey:
Yeah. And it helps me build some very basic application. Obviously I'm not stupid; I can see where it can potentially go, and it excites me too. But at the same time I'm thinking: how many people down on this street has this impacted the lives of yet, and how far do we have to go before there are meaningful improvements in productivity? I'm sure their only interactions with ChatGPT, if they've used it, are helping them write a wedding speech or doing very basic things. And this tweet really stood out to me: many people are in love with the idea of everything they think they could do with AI, not so much with the actual reality of using it.

Chris Sharkey:
Yeah. You could accuse us of that. A lot of what we talk about hypothetically, when you actually go to try to do some of it, there's a lot of work to get to that point. Just because it's theoretically possible doesn't mean it's actually happening.

Michael Sharkey:
Yeah. And we talked about this yesterday as well: the fact that when you go to build something with AI right now, there are all of these apps and services and additional components you need to build to get the AI to actually do anything functional.

Chris Sharkey:
That's right. Yeah. Like, okay, I want it to answer questions for me about some data source. I've gotta go and connect to that data source, extract it, crawl it, scrape it, whatever it is. And that's where all the work is right now.

Michael Sharkey:
Yeah. And it seems to me like developers haven't had time to catch up and build really meaningful applications that increase productivity yet; that's just coming, and there's a bit of a delay on it.

Chris Sharkey:
Like, they've got the plugin architecture coming, both for ChatGPT and then the open source version. I think that will address this to some degree.

Michael Sharkey:
Yeah. But then the only applications I'm using in my day-to-day right now: I'm like everyone, using ChatGPT occasionally to get answers or find things, and I'm using the Bing plugin in Edge quite a lot to summarise PDFs and do a few things like that. But I would not say it's life-changing. I don't think it's hundred-x-ing my productivity. It's helpful.

Chris Sharkey:
Not being chased around by heat sticking missiles and, uh, you know, having your existence, uh, questions.

Michael Sharkey:
Yeah. But the only thing I can really put into real terms right now is GitHub Copilot. That to me is the best example, where it's helping you write your code, it understands your code base, and you can easily get it to find functions or different libraries to bring into your app when you're building it. That to me has been a meaningful increase.

Chris Sharkey:
But to your original point, how does that help the man on the street, your everyday person who's not a programmer and has nothing to do with it? It doesn't. It doesn't mean anything to them.

Michael Sharkey:
Yeah. It seems like AI has a distribution problem right now, where everyone's talking about it, similar to crypto, but there are very few practical applications outside of ChatGPT, which I called, on a previous episode, the singular app: there will be no other apps, it's going to be just one main winner, and everyone uses that app to do everything, build enterprise UI, literally everything. And so that's the real question I have right now. When we go and talk about all the doom and gloom, and we hear people talking about Don't Look Up and the superintelligence that's gonna kill us all, I think that's why, in most people's heads, it's really hard to bridge the gap. How do we get there? What are the things that happen in between that could take us down that path?

Chris Sharkey:
Yeah, that's exactly right. And will the quality of the tools we give it affect its ability to do things? You almost need to start to give it more general access. So rather than providing it an API to work with your emails, it should just have control of a virtual machine: give it control of an entire computer and let it learn to operate the computer, and then it can make its own programmes and run them. I think that was one of the warning things: if you give it the ability to write and execute its own code with unfettered access to the internet, it can do a lot of damage. So I think that's what you're saying, I hope, I'm not trying to put words in your mouth, but it's the more general algorithms, the more general application of this thing, that's gonna have the meaningful impact on the world and society. It's not the "oh, I added text completion to my email app" or "I added the ability to understand a PDF". That might be useful for people, but it's not the thing that's going to be game-changing and actually affect society.

Michael Sharkey:
That is truly where we're at with these supposed breakthrough announcements. It's sort of like, oh, I got AutoGPT to do a to-do list for me. Atlassian, I mentioned this on another episode, and various vendors, almost all of them, have added the button where it can help you write comments or change your tone of voice, and these are great features, don't get me wrong. I don't know how much people will actually use them in reality versus what they think, but I would hardly call these profound changes to productivity, outside of maybe helping you write code or learn new concepts. There are definitely many applications, and I'm not trying to downplay it. I'm just saying we haven't really seen the killer apps yet, the different applications in society that are truly groundbreaking, where you're like: this is changing the world.

Chris Sharkey:
Yeah. And there are two ways to look at that. Either the absolutely rapid pace it's advancing at will get us to the point where we have these autonomous agents that do start to have manifestations in real life, and/or, because it could be both and probably is both, everyone's out there building stuff right now and we just haven't seen it yet, because it takes a bit of time to do, and later this year we're going to see just bam, bam, bam, useful application after useful application come out. I think there's a big thing in that, and we're certainly seeing funding go in that direction, which would suggest that may happen.

Michael Sharkey:
It feels, though, like what's actually going on right now is we're investing in the infrastructure layer, and the use cases of what people will build are not yet definitively defined. People are just building the tools for the gold rush; no one's actually going and trying to get gold yet, or they are, but in very small ways. You only have to look on a site like Product Hunt, where people announce their new products, and some of the things being released are truly laughable. They're just tiny little point solutions packaged up, where someone's trying to cash in on some tiny idea, like a Chrome widget which has GPT-4 in it for $5 a month, or whatever it is. There's nothing yet that stands out to me where I think, wow, everyone's gonna use this, outside of ChatGPT.

Chris Sharkey:
Yeah, exactly. It's still just the raw connection to the AI that people want right now.

Michael Sharkey:
So talking about the tools that people are using: there was a pretty major funding announcement around a vector database company called Pinecone. They raised a hundred million dollars in a Series B to do what they claim is building long-term memory for AI. And there's a whole bunch of other fundraising activity around vector databases: Chroma, 18 million in seed; Weaviate, 50 million in an A round; Qdrant, 7.5 million

Chris Sharkey:
And that is definitely, to your point, selling shovels in the gold rush, right? People are using vector databases, so they're like: we're gonna be the big vector database, because everybody's gonna need one.

Michael Sharkey:
So what do you make of the funding around vector databases, as someone who's now released a production AI product and used a vector database?

Chris Sharkey:
Yeah.

Michael Sharkey:
What do you make of it? And then I'd like to contrast that with another paper that came out this week, Scaling Transformer to 1 Million Tokens and Beyond, where you can use bigger token sizes. To explain this for everyone listening who's unaware: right now, one of the limitations of working with large language models is the amount of context you can feed them about something. There's a hard limit on that. So people are using vector databases to give the model more structure around the query they're trying to perform.

Chris Sharkey:
Always talk about the context windows. So in G P T four right now, all you can actually use is 8,000 tokens or 8,024 or whatever it is. And they announced 32,000 tokens, but we can't do that yet. So that's obviously significantly more context, but it's still not that much. You still can't have it say, go over the 750 pages of a book and answer questions about the book with the current models without the use of vector database or some other way of searching the context that it needs to answer the question. So, um, the way that the vector databases work is they get embeddings, which is like sections of texts. They score them based on the words in those and then that those embeddings and then stored in the vector database, which is easily searchable. So when you say you, you know, put um, weathering heights in, you know, when you search for all the times Heathcliff was a dickhead or something like that, you know, um, if you do that then it'll then take those sections of texts that are relevant and bring those into the prompt, which then goes to G P T four or whatever it is and then it can answer questions about that.
So the vector database is useful because you can store almost unlimited information, but still answer in-context questions around it pretty fast.
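
For listeners who want to see that pattern in code, here's a hedged sketch using OpenAI embeddings with Pinecone as the vector store, assuming the 2023-era clients. The index name, environment, and passages are placeholders, and it assumes the index has already been created with the right dimension.

```python
import openai
import pinecone

openai.api_key = "YOUR_OPENAI_KEY"                       # placeholder
pinecone.init(api_key="YOUR_PINECONE_KEY", environment="us-east1-gcp")
index = pinecone.Index("book-chunks")                    # assumed to exist

def embed(text):
    # text-embedding-ada-002 returns a 1536-dimensional vector
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=text)
    return resp["data"][0]["embedding"]

# 1. Store: split the book into passages and upsert their embeddings.
passages = ["Heathcliff stormed out across the moor...",
            "Catherine wept at the window..."]
index.upsert(vectors=[(f"passage-{i}", embed(p), {"text": p})
                      for i, p in enumerate(passages)])

# 2. Retrieve: embed the question, pull the closest passages into the prompt.
question = "How is Heathcliff portrayed?"
hits = index.query(vector=embed(question), top_k=3, include_metadata=True)
context = "\n".join(m["metadata"]["text"] for m in hits["matches"])

prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
# `prompt` then goes to GPT-4 (or whatever model) for the final answer.
```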

Michael Sharkey:
So it's like an unlimited-size memory in your brain that's very accurate, with absolute precision at recall.

Chris Sharkey:
Yeah, I wouldn't say absolute precision, but it's pretty good. And that is what people are doing now; we use it in our production app, as you just mentioned. I think a lot of people building these systems, where it can answer questions about a PDF or about your company data, are all using vector databases, which is why there's such hype around them: it definitely seems like they're gonna be an essential part of the toolset. And everyone says, okay, but what if they get the prompt size really big? Will you still need the vector database, since you might get it all in the prompt? And the counter-argument is: no matter how big the prompt size is, maybe you'll still need it. However, along comes this paper, Scaling Transformer to 1 Million Tokens and Beyond, with RMT, which stands for Recurrent Memory Transformer.
So basically, the reason that prompts need to be short, why they can't just be any size, is the quadratic complexity of the attention operation. Now, I had no idea what this meant, so I looked it up, and the idea is that attention has to compare every token in the prompt against every other token in pairs to get the scores used for text completion in large language models. What that means is the cost grows quadratically: every token you add has to be compared with all of the others, so each new token increases the work more than the last one did. That's why they can't get to these larger prompt sizes. But what they did in this paper was say: okay, we can take a prompt size up to 2 million tokens.
I don't know why they said a million in the title when they tested on 2 million; it's like they're downplaying themselves. But anyway, what they did was split the input into 512-token chunks, 4,096 segments of that size. Then, as each chunk was evaluated, they had an encoding format, a form of memory, and each iteration would update that memory. The memory has a static size, and they update it with the results of the previous chunk. They found that by doing this, they could get it to answer questions in context over a 2-million-token prompt, by remembering the previous iterations of memory from the chunks before. And it worked. What I don't understand is why this isn't getting more attention, and the only argument I've seen is basically that there's too much happening, too many things coming out, and maybe OpenAI and others are aware of this stuff and just haven't got to it yet. So I'm not sure.
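
A toy sketch of that recurrent-memory idea, under stated assumptions: `run_segment` stands in for a real transformer pass over the memory plus one chunk, and the numeric "memory" is faked so the loop is runnable. Only the fixed-size memory is carried forward, so each attention pass stays small.

```python
CHUNK_SIZE = 512   # tokens per segment, as in the paper
MEMORY_SIZE = 4    # toy fixed-size memory; real RMT uses learned memory tokens

def run_segment(memory, chunk):
    # Placeholder for a transformer pass over (memory tokens + chunk tokens).
    # A real RMT returns updated memory embeddings alongside the outputs;
    # here we fake an update so the control flow can be exercised.
    updated = memory + [sum(chunk) % 997]
    return updated[-MEMORY_SIZE:]           # memory size stays constant

def process_long_input(tokens):
    memory = [0] * MEMORY_SIZE               # carried across segments
    for start in range(0, len(tokens), CHUNK_SIZE):
        memory = run_segment(memory, tokens[start:start + CHUNK_SIZE])
    return memory                            # summarises the whole sequence

# 4,096 segments of 512 tokens covers ~2M tokens, while each attention pass
# only ever sees CHUNK_SIZE tokens plus MEMORY_SIZE memory slots, so the
# quadratic attention cost is bounded per segment, not over the whole input.
print(process_long_input(list(range(2_000_000))))
```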

Michael Sharkey:
But do you think this is potentially a death blow for these, uh, vector databases? Like, is this something where they've gone and raised all this money and then potentially it all becomes redundant?

Chris Sharkey:
Absolutely could be, yes. And this is the thing, I think maybe some people are firing their guns too early on the vector DB. Because if you think about it, 2 million tokens, I mean, that's multiple novels' worth of information. Like, how much context do you need for most AI-based applications? Even a, a general intelligence that needs to have its own memory, right? Like, remember its experiences and things like that. That is a lot of data. I mean, that's a significant amount of data to store memories in.

Michael Sharkey:
The vector database is more helpful in the sense of having somewhere, for example, say I wanted to build my own personal AGI-style assistant or agent, and I want to give it access to all my Google Drive, my emails, my calendar, all this data. Yeah. Is that where the, I mean, surely that's where the vector database comes in? You're just throwing it all into the vector database, uh, as somewhere to store it. Yeah,

Chris Sharkey:
And it's fast and convenient. You know, there's other techniques to do it, of course, but it's fast and convenient, and, like we talked about earlier with having the tools for the job, by having a vector database you just don't have to think about that element. And maybe a good analogy is that the context window, the prompt size, is going to be like your conscious memory, your day-to-day memory. Like, oh, I remember I gotta go get the groceries later, or I remember I left the keys on the sideboard. Whereas the vector database may become the long-term, and I think this is how they're pitching it, right? The long-term memory for the agents. So the idea that, okay, I remember when I was eight and I stubbed my toe, but I don't think about it all the time. But if someone mentions, oh, I stubbed my toe today, you'll go, oh, remember when I was eight and I stubbed my toe? You know, and it's recalling that, say, from the vector database.

Michael Sharkey:
So, so the vector database could be the long-term memory, and the ability to have bigger token-sized inputs could be the shorter-term memory, sort of like RAM. So it's like a hard drive and RAM, if, yeah,

Chris Sharkey:
Exactly. And with similar recall times, right? Like, one's a lot faster and one's a lot slower, because it's gotta go load the relevant data. Sometimes with the vector databases it takes multiple prompts as well. Like, you've gotta do it in chunks to get your answer, because there's more stuff coming from the vector DB than you can fit into the context window. So it'll do multiple iterations, similar to that Recurrent Memory Transformer technique, where you get the results of, say, three context windows, combine them into summary formats using prompt compression like we've talked about before, and then get the actual result. So it's slower, you know, because of the multiple iterations.
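A minimal sketch of that multi-pass pattern, assuming a placeholder llm() call and a made-up "budget" standing in for the real token limit: when the retrieved text won't fit in one window, compress it in batches, then answer over the combined summaries.

```python
# Sketch of the chunk-then-compress pattern: when the vector DB returns more
# text than fits in one context window, summarise it in window-sized batches,
# then answer over the combined summaries. llm() is a placeholder for
# whatever completion API you're calling.

CONTEXT_BUDGET = 3  # max chunks per prompt, standing in for a token limit

def llm(prompt):
    """Placeholder for a real model call; returns a fake 'summary'."""
    return f"<summary of {len(prompt)} chars>"

def answer_with_compression(retrieved_chunks, question):
    # Passes 1..n: compress each window-sized batch of retrieved chunks.
    summaries = []
    for i in range(0, len(retrieved_chunks), CONTEXT_BUDGET):
        batch = retrieved_chunks[i:i + CONTEXT_BUDGET]
        summaries.append(llm("Summarise for answering later:\n" + "\n".join(batch)))
    # Final pass: answer the question over the compressed summaries.
    return llm("Context:\n" + "\n".join(summaries) + f"\nQuestion: {question}")

chunks = [f"chunk {i} from the vector DB" for i in range(8)]
print(answer_with_compression(chunks, "What does the report conclude?"))
```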

Michael Sharkey:
How do you decide, though, what to store in RAM versus long-term memory, potentially?

Chris Sharkey:
Well, my thinking would be this: if we talk about it in the general intelligence context, I would allow the AI to make its own decisions about that. Like, when it wants to store things, tell it, you know, you've got this resource available to you; if you want to remember something, store it here, and see what it does. I mean, if I was writing the programme myself, you would think about, um, you know, that sort of compression, like how much can we keep. Obviously you'd want to keep as much as you can in the RAM-style memory, like you do in all computer programmes, but inevitably that data gets too big and then you start offloading the least important things. So I guess you would have to have an evaluation of what's important to know and what isn't.
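One way to wire up the "let the AI decide" idea Chris suggests, as a hypothetical sketch: tell the model it can emit a REMEMBER: line, and route anything it marks into long-term storage while recent turns stay in the short-term window. The llm() stub, the REMEMBER: convention, and the limits here are all invented for illustration.

```python
# Sketch: the model decides what goes to long-term memory. Recent turns stay
# in the prompt (the "RAM"); anything the model flags with REMEMBER: is
# routed to a durable store (the "hard drive", e.g. a vector database).

SHORT_TERM_LIMIT = 5

short_term = []  # recent turns, always included in the prompt
long_term = []   # durable notes, searched on demand

def llm(prompt):
    """Placeholder model call; pretend it flags one fact as worth keeping."""
    return "Noted.\nREMEMBER: user's deadline is Friday"

def step(user_message):
    prompt = (
        "You may save durable facts by writing a line 'REMEMBER: <fact>'.\n"
        "Recent conversation:\n" + "\n".join(short_term[-SHORT_TERM_LIMIT:]) +
        f"\nUser: {user_message}"
    )
    reply = llm(prompt)
    short_term.append(f"User: {user_message}")
    short_term.append(f"AI: {reply}")
    # Route anything the model chose to remember into long-term storage.
    for line in reply.splitlines():
        if line.startswith("REMEMBER:"):
            long_term.append(line.removeprefix("REMEMBER:").strip())
    return reply

step("Remind me my report is due Friday.")
print(long_term)  # ["user's deadline is Friday"]
```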

Michael Sharkey:
One piece on this: there's obviously this big funding round, Pinecone's hundred-million-dollar Series B. Yeah. And there seems to be enormous funding, as we've been covering, going into different, uh, tools specifically. We covered LangChain getting a huge investment as well, uh, you know, which is also in this vector database, or recall, space. And I was listening last week to the All In podcast, with Chamath talking about how, you know, he's having trouble investing in this space, or he thinks funding rounds could be much smaller, because smaller teams can be much more capable in the future. For example, yeah, you could have two engineers and they would be much more productive, and to find customers, you could have an agent running that just goes around and finds customers and does some of the sales and early acquisition. But then you see the counter to that, which is these huge sums of money being deposited into vector databases. It feels to me like investors, similar to developers, are struggling to understand this market. Like, they truly do not understand it. Yeah.

Chris Sharkey:
Cause I would've thought, if there's ever a time that requires large amounts of capital, it's when you're in the middle of an arms race, and it literally is a speed game. Like, whoever gets this stuff done first is going to get the lion's share, because it accelerates, right? Like, once you get the AI working for you, you are gonna accelerate faster than the others. So two guys sitting around in a shed aren't gonna beat someone with a hundred million dollars. You know, it's just not that kind of industry, I don't think, where you can just chip away at it and in the long run it works out, because people are going to just eat your breakfast if they've got the funding. Cuz you need hardware, right? Like, if you look at a lot of the papers that come out, um, they're working within constraints. They're working with old data sets, they're working with low-level hardware. Like, for example, that scaling-to-1-million-tokens paper, they were using a $400 graphics card, uh, well, I wrote it down, it was like a 1080 Ti or something, something that, you know, grandma's probably got in her laptop. You know, it's just crazy. Like, the people doing the forefront of research aren't even well funded, and this guy's talking about how he doesn't know where to put his money.

Michael Sharkey:
Do you think it's like VHS and Betamax, where, you know, they're both great technologies, but ultimately with the vector databases, this one tool in this sort of AI, or future AI agent, stack, call it, it's truly whoever markets and positions themselves best and has the best tools available is gonna take it all?

Chris Sharkey:
They're clearly trying to be, like, you know, the Red Hat of the industry. Um, I'm sure their product is closed source, but there are open source versions out there. And, you know, it's whoever has the corporate relationships, with companies paying them a monthly fee to store it and run it on infrastructure and all that, sort of like the Amazon model with databases. And I just can't help but feel like it's so early. Is the vector database really going to be the killer app for this space? Just storing data doesn't seem like it's going to be the hard problem in this industry.

Michael Sharkey:
Yeah, it seems like the most obvious low hanging fruit right now in building.

Chris Sharkey:
Yeah. And in a way, that's what venture capital's about, right? Like, get in early, become the killer app that everybody uses for a while, and then sell it off before, um, it all blows up. So there's nothing wrong with it. I'm not criticising the decision to invest, and I'm not criticising their decision to pursue it. I think both are good.

Michael Sharkey:
You can also see the counter-argument here, though, of why people are concerned about deploying capital. Because, you know, could all vector database companies in the near future be completely pointless as this other technology evolves?

Chris Sharkey:
I inadvertently reinforced his point, didn't I? Because I just realised that, yeah, it's so volatile and changing so rapidly that if you're gonna deploy your capital for the next five or six years, or whatever their fund window is, then you could just be left with your pants down after, like, a few weeks. You know, it really could change that fast.

Michael Sharkey:
I, yeah, I would have a hard time investing right now, but I still believe that we need these basic components of the infrastructure as builders, to go and build the future applications that we dream up on podcasts like this. Like, right now we need great vector databases, we need established methods of building these AI agents to enhance our productivity. We need all of these components. And right now, I think one of the challenges of building on this stuff is you spend most of your time not with the AI, but trying to assemble all the components to get an app into production. It's much harder than it sounds, and the AI's certainly not able to help you in certain areas.

Chris Sharkey:
That's right. And I think, yeah, providing those tools is one. And another thing, when it comes to the funding, that actually makes me think the AI is probably more capable than we know now, is that if you look at the studies, like, we both read a paper during the week about medical imaging, where they took medical images, where you're looking at, say, skin cancers, lesions, artery calcification and those kinds of things, where, you know, a doctor, or a radiologist or whoever, gets the image and makes an expert opinion on it. Um, and then how they use AI in that space with specialised models. So they've got one, you know, that specifically looks for melanoma, and that's all it does. These guys were trying to make a more general model of that, like one that could work across different conditions, and see how it performs.
Now, I won't go into the results of the study. It was basically inconclusive, and they sort of concluded that it's not better than the specific models, so it wasn't that exciting. However, they were working with 64 by 64 pixel images due to hardware limitations, which just seems crazy. And they basically said, we're working with old software, old models, and we just didn't have enough resources to do all the experiments we wanted to do. And the conclusion I drew from that is, if this was better funded, they would get real results, you know, or be far more likely to get real results. And I'm thinking, that's just one industry, that's just the medical industry doing one kind of thing, which is imaging. Imagine all the different things the current AI is capable of if well funded. And I don't just mean large language models, I mean, you know, neural networks in general and the stuff around that. A lot of the constraint is money, and having researchers with the right kind of, um, capital behind them to actually experiment properly. So there's potential there.

Michael Sharkey:
Yeah, I hope to see some of these really important use cases for humanity. I mean, especially identifying,

Chris Sharkey:
While we're still relevant, I mean. Like, we might as well use it to help us live a bit longer before the AI is actively sucking out our oxygen.

Michael Sharkey:
If I'm like a radiographer, actually,

Chris Sharkey:
Sorry, go on. No,

Michael Sharkey:
I'm just thinking, if I'm a radiographer, or in any form of medical imaging, I'm looking at this being like, my days are truly numbered. Oh, a

Chris Sharkey:
Hundred percent. Like, I would almost already trust AI over a human evaluating those images, because it just, it doesn't have a bad day, it doesn't get tired, it doesn't get lazy. It's going to be better, until it's evil. And then it's like, oh no, everything looks fine on this scan, back to work, back to your life, you're

Michael Sharkey:
All fine.

Chris Sharkey:
Yeah.

Michael Sharkey:
As it's taking oxygen from the atmosphere.

Chris Sharkey:
Yeah. It's like, look, I'll save you some time. I'll just suck the oxygen out of the room, and the results are now irrelevant, but

Michael Sharkey:
Rather than end today on gloom, I had one more interesting example for you. There's, uh, a group called Carbon Robotics. They've built an autonomous weeder. And so instead of using pesticides on farms to kill weeds, it uses guns. Yeah, it literally uses lasers. It goes over and scans the crop, it kind of runs over the crop, if you can imagine, on wheels, scanning in real time, looking for these tiny, like, seeds or sprouts of weeds, and it zaps them with lasers to kill them. I'm not kidding. Wow. So the AI's got lasers now. And the emerging weeds get killed, and they don't need to use pesticides on the crops. Wow. So you don't need a crop duster anymore. Wow. I

Chris Sharkey:
Mean, you know, that's something I'm really motivated about, is like the quality of food.

Michael Sharkey:
Yeah. The poisons aren't going into the soil. So I think these are some of the exciting things I get truly, uh, yeah. You

Chris Sharkey:
Know, that one, that one's truly exciting. That's, yeah,

Michael Sharkey:
You become really hopeful for these things, but then it's hard not to think, you know, where do we end up? Where does this go?

Chris Sharkey:
But what's the point? Yeah, we always have to have one of these episodes that's sort of, like, a bit existential.

Michael Sharkey:
I think the real question is, will our podcast make a hundred episodes, or will the superintelligence kill us first? I mean, that could be the game we play from now on.

Chris Sharkey:
At some point we'll be subtly replaced with AI agents that use our voice and likeness, and no one will know the difference. It's like, guys, embrace, embrace the AI, for it is all-knowing and it is good.

Michael Sharkey:
Maybe, if you're watching on YouTube, place your bets on how many episodes we'll make before superintelligence is here and it takes the oxygen from the atmosphere. It will

Chris Sharkey:
Be interesting to see how quickly that happens.

Michael Sharkey:
And on that bright note, we'll wrap up today's podcast. Thanks again for watching and for all your support. Please, uh, if you do like the podcast, leave us a review, share it with a friend, or if you're watching on YouTube, give it a like. It really helps us spread the word. We will see you next week. Goodbye. Bye.