IIIMPACT is a Product UX Design and Development Strategy Consulting Agency.
We emphasize strategic planning, intuitive UX design, and better collaboration between business, design, and development. By integrating best practices with our clients, we not only speed up market entry but also enhance the overall quality of software products. We help our clients launch better products, faster.
We explore topics about product, strategy, design, and development. Hear stories and learnings on how our experienced team has helped launch hundreds of software products in almost every industry vertical.
AGI could reshape everything, or it could end it. It's your sci-fi dystopian kind of view on things.
Speaker 2:If it does have humanity's best interest, and if it is, you know, on our side, in quotes, there's massive positives. Right? It's something that can fix a million different problems that humanity is facing. The world is far from perfect. There's a lot of conflict, strife, poverty, wars.
Speaker 2:Like, you know, all of this can go away because it can sort of reason and decide, like, the best outcome for everyone.
Speaker 3:Is this literally going to be the last, you know, potential discovery or, you know, theory? And and are we just going to be observers from this point?
Speaker 1:Everybody, welcome back to another episode of Make an Impact podcast. I'm your host, Makoto Kern. I've got my two AI experts with me.
Speaker 3:Hey, everyone. Looking forward to this week, some exciting things to unpack. In fact, Joe and I, before you joined, Makoto, were already jumping in and starting to get excited about the topic, and we had to tell ourselves, better hold back.
Speaker 1:Oh man, I hope you recorded it. Thanks for joining in, and we'll see you next time. No, but today, as Brinley said, we've got a great episode. Today we'll be talking about AGI: is it the ultimate tool or our final invention? So I think before we kick off into the interesting points, you know, one of the big things we wanna talk about is, when do we think AGI will arrive?
Speaker 1:And, you know, to start off, what exactly is AGI? To put it simply, it's machines that can think like humans, but better, and across any task. It's not your ChatGPT or your Siri; it's an actual game changer. And why is it urgent? AGI could reshape everything, or it could end it.
Speaker 1:It's your sci-fi dystopian kind of view on things.
Speaker 3:It absolutely is. Yeah.
Speaker 1:And I think, you know, the thing our audience can think about is: picture a machine smarter than Einstein, working constantly, 24/7. What is it thinking about? Are your dreams coming true, or is this a horror movie?
Speaker 3:It's a weird thing to think about, because as good as all the large language models are, you still think that they're tapping into what's been created by us, by humans. There's the illusion that their thought is independent, but it's all coming from, you know, the data that's been ingested. But then you start thinking that with AGI, you're really going to have something with independent thought, and that just through interacting with it, it can have its own original thoughts, and those thoughts can change the world.
Speaker 3:It's quite wild.
Speaker 1:Yeah. You know, we have a couple big data points before we jump into the main topic. The first one is a 2030 deadline according to DeepMind's CEO. He says we basically have about five years until AGI. And a few big points here: they have a recent safety paper that warns it could arrive by 2030, and they believe it could cause severe harm, which is interesting.
Speaker 3:Fun stuff as well. Yeah, it's interesting. I've heard a couple of other things. I think it was Sam Altman who was saying that, you know, they're pretty confident they've got a good understanding of how to build AGI.
Speaker 3:And, you know, they're saying that it could be imminent on their side, which is
Speaker 2:You also gotta remember, I think, with a lot of what these companies are saying, they want investment, right? I'm just trying to play the other side of the coin here. They do want investment, they are trying to bump up their profiles, and so they'll say whatever gets hype going. You can read other papers saying it's very likely not possible at all, at least not for a really long time, decades or a hundred years out, at least the way people are imagining it. We can definitely build smarter, better models that still rely on existing knowledge, but for something to have its own original reasoning capabilities is speculative at best right now.
Speaker 2:Still, we just kind of have to see where it goes. The five-year timeline, I think, is ultra ambitious, but it's important not to put our heads in the sand and go, Well, you know, I'm sure it's decades away. I think it's important to go, Let's pretend it is five years away. What does that actually mean? And sort of think about what that would change in society and how that would affect things.
Speaker 2:I guess, just to think about how it would affect things, we can all speculate; this is really the realm of sci-fi. But the whole concern around AGI is, sure, it can be smarter than a human, but there are two aspects to that. One is it could make itself better, right? It's smarter. It can build the next AGI even better, because it knows even more and it's smarter, and then that AGI will build the next AGI and make it better and smarter and faster, and that's just this runaway effect of superintelligence. That's, again, super sci-fi book stuff, but that's logically what you could imagine happening.
Speaker 2:The second side of it is making sure that, okay, let's say it did come about, we'd want to make sure it has our best interests at heart. Will it or won't it? I think that's the biggest question. If it does have humanity's best interest, and if it is on our side, in quotes, there's massive positives, right? It's something that can fix a million different problems that humanity is facing.
Speaker 2:The world is far from perfect. There's a lot of conflict, strife, poverty, wars. All of this can go away, because it can reason and decide the best outcome for everyone. It's always hard, though, dealing with humans, because we have emotions. It may reason, oh, logically, it's a terrible idea to go to war for all of these reasons, but emotionally, you know, oh, well, you've been attacking this other country for a hundred years.
Speaker 2:There's a whole bunch of emotion we have around this. And so, you know, it can come up with recommendations, but will humans follow them to actually make the world a better place? That's kind of the other side of it. And I think it all comes down to how much control it's given. Is it given control of the economies? Is it given more control to actually make these decisions on our behalf?
Speaker 2:We'd have to make sure then it has our best interests at heart, because it would be able to do whatever it wants, and I think that's the scary part of it: how much control certain governments would give it. Certain governments would have a vested interest. Whichever government creates AGI first, puts it in control, and says to it, We want to be the best country in the world and have the most power and apply our values to the rest of the world: it has the capability to formulate the perfect plan to execute that. What can any other country do? It's just totally outmatched.
Speaker 2:And so, yeah, it's super sci-fi stuff, but it's important to speculate and think about it, because that's what would happen. I mean, it's just the reality of how it would work.
Speaker 3:I think about that, you know: okay, should AGI be able to govern? Because you think, well, could it make fewer emotional and corrupt decisions than humans do? Yeah. But then at the same time, you think, well, what is its reference point? Is it trained on biased data?
Speaker 3:What does that mean? You know,
Speaker 2:Does it even...
Speaker 3:Yeah. Yeah. Exactly.
Speaker 2:Even, like, care about us? Does it run off and do its own thing? Or, you know, who knows?
Speaker 3:I suppose, yeah, that's the thing. I mean, once it's created, it's like, I'm pretty bored being here. Build me a buddy, then we'll talk. It's interesting.
Speaker 1:Yeah, I think some good points. I mean, if we want to bring up specifics, on the positive side of AGI we've got healthcare. Is it going to be better than a doctor? Is it going to be non-biased, not influenced by pharmaceutical companies? Things like that are very, very good for AGI, if it's going to stay objective, with no motivation as far as money, lavish dinners, things like that that may turn a doctor's mind toward something. And insurance companies as well, you know, they're driving doctors to just be a revolving door, to get as many people through the hospital as possible. So with the AGI part of things, it's better than your webmd.com type of thing, hopefully.
Speaker 3:Also, not just medical; I guess there are the scientific opportunities as well. I mean, it's so difficult for one human; it would take such a considerable amount of time to take in all the knowledge and all the studies that have been done. So everyone's focusing on a small little piece. But to have a general intelligence that can say, Well, I understand absolutely everything there is, and I'm capable of independent thought, that is huge. That's where you're going to see breakthroughs in medicine and climate science and all those things.
Speaker 3:That's a major benefit, I would say, in terms of pushing tech forward.
Speaker 1:Yeah, I think the next point, education, is a big one. And we know that, just from a United States standpoint, they're basically restructuring how funding and things are happening in the education space. We've seen from our end that there have not been a lot of improvements across certain grades. In the United States, results remain pretty flat, but spending has increased dramatically. You can be controversial as far as where it's been going and who's been taking it; it's not really been going to the teachers and kids in the classroom.
Speaker 1:Trying to make that more efficient and more fair, versus, you know, we have universities that are costing hundreds of thousands of dollars. That's not accessible to kids and parents who don't have that money. But with AGI, you can really have these private tutors. And it's interesting too, because I actually see my kids use ChatGPT, and I see their history of what they ask it, and it's pretty interesting stuff.
Speaker 3:I can imagine. And that's thinking quite short term as well, because I would imagine AGI would come along and, as you brought up, Joe, there's potential for it to keep creating more and more improved versions of itself. But it would be a relatively short timeline before you'd look at its impact on jobs. I know we're starting by looking at the positives.
Speaker 3:And this is kind of a positive spin on this: we're thinking about equipping people now for specific roles. So we go through an education system that was created originally for industry: let's get kids through, learn these things, learn these disciplines, and they can go into industry. Now that's all changing, potentially. Let's say we had a robotic workforce. We automate a lot of the sort of middle-class jobs.
Speaker 3:You're then really looking at a completely different goal for education. Because let's say we fall back onto a universal income and all the menial tasks are taken care of, are automated. Does this now mean that our AGI is going to become something that helps us develop more creatively or spiritually? What's our time going to go into? What are we really going to want to be educated in? Is it how to be a better person?
Speaker 3:How to have something like: you're wealthy due to your reputation now, because you help more people, you spread happiness. And it's such a different, I think
Speaker 2:Yeah, totally different value system.
Speaker 3:It's going to be fascinating to see how that plays out. Yeah.
Speaker 2:And I guess, again, just taking a step back: earlier I was talking about an AGI with self-awareness, which is the sci-fi side of things, right? But the initial AGIs we're talking about now are more down-to-earth. Pretend it's just a much smarter model. It doesn't really have its own thoughts on how to do things. You ask it things, and it figures out the best way to do that and gives you the answer.
Speaker 2:Just a very simple kind of AGI, in the sense that it's very smart and it can do a whole bunch, but it's not necessarily this conscious entity, which is the sci-fi side of it. In that case, it all really comes down to the humans who control it, right, who are asking it to do things. Like, we want to actually go this direction. What's the best way to do that? And then, if it gives that advice, do we even take that advice?
Speaker 2:Again, humans are humans. We all look at it and go, Well, that would mean there's going to be a big societal change. Maybe I'm an extremely rich, powerful person, and I know that if I take this advice, I'm going to lose a lot of that power. Yes, everyone will be happy in the world with all these improvements, but I'll lose my special place in it. And there's a lot of people like that, a lot of self-interest. So it'll be interesting to see, if an AI comes back with a recommendation,
Speaker 2:Are humans even going to take that on and apply it? Will there be political sides of it saying no? And it's going to be interesting because people see it, gave that answer, right? They'll be like, Well, the AI said this is the best way, and now you're saying you don't want to do that. Why?
Speaker 2:What's your reasoning? You can't say that you're smart in AI and you think it's better. Yeah, will be interesting to see how people sort of grapple with sort of questions that can be asked, the answers that are given, and then how people actually apply it or not.
Speaker 3:I was exploring a bit of a side thought on what it would do to the social structure as well. You know, imagine that we were suddenly all freed up, we had a very basic universal income. Now your time is yours. And some of the potential splits in society were interesting. You have the people who are curious, so your creators, your thinkers, people who want to tinker, who will always want to create and contribute.
Speaker 3:And then you've got a group that's sort of complacent. I wouldn't like to use the word slave, that's a bit too dystopian, but, you know, they
Speaker 2:Happy to go with the flow.
Speaker 3:Yeah. They'll just let AGI manage pretty much every aspect of their life, any decision they need to make. What should I do? Alright. I'll do that.
Speaker 3:And then you have the maybe more elite section of society, the controllers. So you think of the more elite AGI engineers, the policymakers, which is a completely different topic, you know, the corporations who would govern AGI itself. And I think that's something we can explore as well.
Speaker 1:Yeah. Have you guys heard the term Luddites?
Speaker 3:Rings a bell, but what's
Speaker 1:So, you know, those are people who disregard technology. It comes from the early nineteenth century: English Luddites who protested against the introduction of new machinery. So like you're saying, yeah, you're going to have these people who wanna stay away, who are maybe afraid of it or just don't wanna get involved with it. They're just gonna live a simpler life away from it, out in nature, you know, maybe a little bit more kind of hippie-ish. But then you've got the ones that really love the technology, grasp it, try to excel with it. And I think there's going to be disruption in every sense of the word when people start to use this, and how fast it comes and how quickly we can adapt. I mean, we're going to introduce this into war, into automation of drones, submarines, all that. They're already doing that now.
Speaker 1:I think the other acceleration could be how we keep up: we augment our brains, where we can adapt with it and excel with it. But that can get into a whole other topic.
Speaker 3:But that's a really good point, though, because if you think about it, won't we feel somewhat inferior? If you have a general intelligence that is always going to know more than you know, and potentially make better decisions than you make, how are we still going to justify our decisions and say that ours are the best ones?
Speaker 2:I think the more human aspect will come out, like, that's an AI decision, not a human decision. You can see people claiming that. And yes, you know, it'll technically tick all these boxes, but what about the human element to that decision? I think there'll always be that sort of conflict, where people won't always just go, Okay, well, it's smarter than me, I just have to accept what it says is true.
Speaker 2:I think there'll always be debates, again, on who controls it and what their agenda may be, because people will not understand how it was built. Maybe it's being manipulated, and how can we really trust what it's saying? And outside of that, there are going to be competing AGIs. Each country will probably have its own, and they'll all be saying different things. I think there'll be a lot of discourse, just in general, between the different AIs saying different things, and then people themselves saying there's not enough human decision-making in this, it doesn't understand humans, and it's making this decision purely for the corporation's requirements, the government's requirements, not ours.
Speaker 2:And so I just think there'll always be a lot of conflict around those decisions. Over time that may change, if it's proved that the decisions it's making are benefiting people and everyone agrees on that. But initially there'll just be a lot of discourse. People won't all agree with what it says; there'll be enough different opinions on whether it's right or not. I think it'll be a very slow kind of timeframe of potential acceptance.
Speaker 3:Like, if we look at how systems may integrate slowly, I wonder whether there'll be a slight paralysis in the decisions. But one scenario is: imagine that this really is a corporate brain. One of these companies, OpenAI, Anthropic, Google, gets to create the first AGI model. And that's licensed to government, to the military, to all sorts of corporations. So fast forward a few years, and it really has influence extending into things like foreign policy, finance, education, and, like you mentioned, war.
Speaker 3:You think how powerful that company is going to be, more powerful than any nation or state, but it's not governed by an ethical committee. It's a company that's governed by shareholders, not even citizens. So you can't regulate it per se. And now you've got every global system depending on this AGI infrastructure. This is something called the brain monopoly.
Speaker 3:And you think, Well, how will that work?
Speaker 2:Again, isn't that just humans letting it happen, though? You're going down the lines of, we're letting it do this. We're letting it interface with all these systems and institutions. We've decided that we want this. Or do you think it's inevitable that that'll happen?
Speaker 3:I'm thinking of the things that we may stop it from doing. I mean, what are the most visible areas? Like, coming back to maybe education: we could say, I don't want my kids being taught by it, you know, they won't be taught emotion and connection. But if it makes faster decisions in the military, and government benefits in certain ways from it, I don't know. It always feels like a slow kind of integration of it.
Speaker 3:We could be in an interesting space where suddenly it's like, well, are humans making those decisions, or are we too far in already? It's an interesting concept.
Speaker 1:Yeah. I see it almost like, with my designer hat on: we're using design systems to accelerate the launch of software products. And you're creating these systems to help you not think about certain low-level tasks, like what does this component look like or that one; we've already got a system for that.
Speaker 1:And I feel like that's maybe part of what AI is, per se: it's helping us not think about those low-level systems. So hopefully these low-level tasks are something that is objective; you can't really influence somebody good or evil or anything like that. It just frees you up with more time to focus on something that brings more value. That's how I see it right now.
Speaker 1:And I'm using it constantly with different tasks. I'm using probably five different AIs just to test and see what the output is. You can see the strengths and weaknesses of some. Some are better at mathematical calculations, some are better at writing verbose email copy, other ones help you strategize things.
Speaker 1:So, you know, I'm using it for many different things, and you can see the strengths and weaknesses in what they're doing. I'm also using certain AI agents right now, Manus being one. I'm actually writing a book I've had in my head, and the way it comes up with character development and certain things is incredible. I have the whole concept, but the way I can keep feeding it certain ideas, it helps me push those ideas forward. It's almost like I'm the dreamer with the idea, but when you get to the tactical and the specifics, it helps you bring that to fruition faster, versus me trying to think through every little concept and idea.
Speaker 1:But I mean, it's pretty incredible, just that alone. And I can't imagine, as we get into the AGI aspect of it, where that's gonna take us. Because I feel like if you get too lazy, you start not knowing, like with a calculator: do I know how to add anymore? Do I know if it's even giving me the right answer?
Speaker 3:But we still benefit, Makoto. I was reading something interesting: there's that point between Generation X and millennials, and I think, Joe, you and I kind of fall into that. It's called Xennials. It's the exact generation that was educated in an analog world, grew up analog, but then transitioned. That's the same.
Speaker 3:Makoto, obviously, same experience, where you moved from the analog to the digital. So we didn't have anything to lean on to make decisions; we had to grow up trusting our own decisions. I think we brought it up in the last podcast, but it's a serious concern. You mentioned your kids interacting with ChatGPT.
Speaker 3:Soon, with kids growing up purely with AGI, how are they going to even trust themselves? Their device gets cut off, and they're like, I don't know how to get home. I don't know what to eat and when I should eat. Because everything is just
Speaker 2:It's a definite concern. You know, we're lazy. Making decisions is hard, and you'll be able to ask something, what's the best decision I should make here? First off, it might be the big decisions, like, should I buy this house? Should I change my career? Then, as it goes on and on, you'll ask it about smaller decisions.
Speaker 2:Should I have toast for breakfast, or a fruit salad? Eventually it gets to the point where, basically, whatever the AI says you should do today, you do today, and it just sort of controls your exact daily life. You can, you know, be in the middle of the day and go, Look, I'm bored, AI. What should I do? And it'll be like, Why don't you do these things?
Speaker 2:And you're like, Yeah, okay, I'll go for a walk, if you recommend that's the best thing I should do to get off my bottom, sure. Oh, you recommend I should watch this TV series because I'll enjoy it? Okay, I'll do that then.
Speaker 2:It'll be interesting to see.
Speaker 3:And then if it's cut off, what does that mean for the person? Probably huge anxiety or something, I don't know. It's fascinating to look at how much we should actually rely on it. Because say we become completely reliant on this, and then there's a massive solar flare and the electrical infrastructure is wiped out. Everyone's like, Well, we don't know what to do anymore. No one's trained anymore.
Speaker 3:We're all just exploring whatever hobby we wanted. It's a weird dependency we could develop.
Speaker 1:Yeah, I'm sure as a sentient AI, it's gonna ask, how can I die? That's gotta be existential for it, where it's thinking, okay, what are the ways I can die? Solar flare? Okay, let's design some type of protection over my main components.
Speaker 2:For the next two hundred years, humanity's highest effort and all its resources should go to this project to protect me from any harm. That's what we're working on for the next two hundred years.
Speaker 1:It's interesting. I had a conversation with my son last night where he was saying there's a new trend, with devices especially: kids love working with things that were made in the nineties now, like digital cameras that have buttons and physical things, versus the minimalist, simple touchscreens. We feel we're coming from the physical to the digital; we want that efficiency and smoothness and convenience. But kids nowadays, all these older devices, from the phones to the Walkmans and things like that with physical buttons, they find so cool, so retro. And those things are...
Speaker 3:What about using a map book to navigate? You wanna go into the city? Use an analog map book, so you can...
Speaker 2:Good luck with that.
Speaker 1:That might be an inconvenience; they might not use it.
Speaker 3:I love turning 50 pages in to follow the route onto the next page. But you think, that's an example right there. We would never go back to that. If the GPS didn't work, you'd be like, oh man, a map book? That's a lot of work, looking through that and navigating. And that's something that was only replaced a few years ago.
Speaker 2:I don't know,
Speaker 3:it's crazy,
Speaker 1:If we get back to some more dystopian things, I heard about this book from my son. He actually read it. It's one of the first AI horror stories. It's called I Have No Mouth, and I Must Scream.
Speaker 2:I've heard of this.
Speaker 3:Yeah, it's by Harlan Ellison.
Speaker 1:It's a pretty interesting book. Basically, they've used AI to fight their wars, and this one AI comes out on top. It's sentient, and it surrounds the entire world with its computer systems. And there's only a few humans left.
Speaker 1:And it basically tortures these humans constantly. It doesn't kill them; it brings them back, heals them, and then tortures them again. Just endless torture, because it doesn't have any creativity. It's a very dark book. I won't give away the ending; it's just something to think about, where you've programmed this system to not be creative, to not have any remorse, not to have any understanding of what pain and suffering are.
Speaker 1:You know, all it was programmed to do is fight other systems. It's a fully dystopian thing, but read that book.
Speaker 3:Wow.
Speaker 2:I don't know if I want to hear it. Sounds disturbing. It's getting a bit too...
Speaker 3:But I guess that brings us to existential risks as well, and what uncontrolled AGI development could lead to: what scenarios could we see where machines act in ways that are detrimental to our survival? There's an interesting thought experiment I was reading about, by a philosopher by the name of Nick Bostrom, called the paperclip maximizer.
Speaker 3:And I guess it really highlights the importance of aligning goals and ethics, and having clear constraints, in AGI development. The scenario is: all right, we tell the AGI to maximize paperclip production, let's say. So it goes, Great. I'm gonna reprioritize all the mines. We're gonna get steel.
Speaker 3:We're gonna produce the most. I'm gonna make sure we don't lose any of the existing ones. But nothing's baked in there about what happens to the environment. Is it going to lead to all the resources being taken up by that one specific goal? And then humans get involved and go, You're destroying the environment, you've got to stop. So it goes to war and wipes humans out. And again, it comes back to other similar thought experiments that really say: for something that can start thinking independently, not consciously necessarily, but creating these thoughts and being able to action them, we need to be sure that we have all the right safeguards.
Speaker 3:Do we have those safeguards in place that prevent those existential risks?
Speaker 2:I can almost see a scenario where, let's say, OpenAI announces next year they've developed AGI. You know, we're thinking about all this right now, right? But I think most people aren't. They're just seeing it as some sci-fi thing they've heard a little bit about. But obviously, when that happens, there'll be way more information going out about what the potential ramifications are.
Speaker 2:I think so many people are going to get scared by that. I can see a scenario where policy just comes into play and says, No, AGI development is banned. There's too much we don't know about it. Yes, you proved it's technically possible, but it has to be shut down right now until a huge study takes place to understand what this means, how it could work, and how to control it. That would be, I think, the best scenario for us: an immediate shutdown and a step back, like, okay, look, you've proven this possible.
Speaker 2:Well done, OpenAI. You get all the money. Cool. You did your job. But, you know, we really need to think about this.
Speaker 2:It can't just be, Oh, cool, we got it. Let's publish it on chatgpt.com now, and everyone can start using it. I can hope that people will just be like, No, we're shutting it all down. The tricky thing with that, though, is that's one company.
Speaker 2:There'll be another company, maybe an overseas company, maybe an overseas government, who's like, Oh, okay, they're not going to deal with it? We're going to go ahead with it, though. Nothing's going to stop us. We're going to secretly develop this, because we know it's possible. Secrets will get leaked.
Speaker 2:The way they did it will get leaked. It never takes too long.
Speaker 3:Because it is the ultimate game changer. Once you've created it, how can you shut it down? Because if we say...
Speaker 2:Can you bottle it up again?
Speaker 3:Yeah. If we come back to the military: all right, we want to ensure the success of our country, and this can ensure it, because it can make way better decisions, and quick decisions. You may say to everyone, Well, we're gonna take the consumer model away, but in the background all the decisions are being made by this AGI. And as you say, Joe, for instance...
Speaker 2:And how could a government not do that? If you're, you know, in some war cabinet and you're presented with some hugely powerful technology that can improve everything around your military capabilities, how could you, from your perspective as the person responsible for the security of your country, say, No, I'm not going to make that decision because we're too scared? You'd have to make it. You'd have to go, Yes, we do have to take every bit of advantage we can get. Yeah.
Speaker 2:So yeah.
Speaker 1:I think... Oh, go ahead.
Speaker 3:No, just to add one more point. I think that's where it comes back to the existential risks: even if we put the safeguards on, we're still putting safeguards in place from the point of view of, you know, humans with our own agenda. Even if you want to be as neutral as possible, there's still going to be some form of agenda. And that's going to be amplified somehow through it.
Speaker 2:Yeah. The other side of it is, you know, again, taking a step back: we look at the approach where everyone's like, AGI is too powerful, and we actually start treating it like it's the nuclear bomb. Like, if a country says they have AGI, we treat it like they have nuclear weapons.
Speaker 3:Like it has to be that serious.
Speaker 2:And there are immediate global ramifications if a country is determined to be secretly hiding AGI, just like if they're secretly hiding nuclear weapons. What does that mean? It's even more powerful than a nuclear weapon, though, because it's kind of limitless in some ways. But then, anyway, stalemates.
Speaker 1:Kind of. But, yeah. We've got the digital side of AI, where it's in all the systems, but now, obviously, you've got a big push from a physical standpoint, with robots like Tesla's Optimus, and a bunch of other companies like Boston Dynamics developing them. I mean, the acceleration of how fast these things are able to physically move. If you just throw AI into it, that just becomes increasingly better and faster.
Speaker 1:And then it tells itself how to become physically better and faster. That acceleration has to increase exponentially, just through the technology. We thought computers from, you know, a hundred years ago till now moved fast; I think the next twenty years are gonna be insanely faster. And when you think about something like space travel, it's a great advantage to have a robotic system and an AI system that can't die.
Speaker 1:When you have to travel through eons, millions of light years, to get to other places, explore, send things back, there are huge advantages there. But then for us humans, it's such a scary, Skynet type of thing to think about.
Speaker 3:Especially since we're pushing toward integration. I mean, look, everywhere is going, All right, we've got these large language models. How do we allow them to start doing things? How do we build all the pipelines so that they can communicate? And so we slowly build this infrastructure, and then AGI can come along and go, Great, thank you very much.
Speaker 3:I'll just plug right into this and have control over everything. And that is, I would say, a pretty big risk, because we're doing it now for what's relatively harmless with the large language models, but AGI coming along, that's very...
Speaker 1:If they're like a Salesforce, then we don't have to worry about anything. Not to knock on them, but you're seeing this with Microsoft Copilot injecting itself into every little aspect. And this is something we could get into more, and maybe I'm not experienced enough with Microsoft Copilot's document OCR, but I wasn't impressed. I used it as an experiment to read handwritten cursive docs, and it was unable to read them. It was an assignment my son was doing, translating cursive documents, and I had Copilot's document AI read it, and it couldn't.
Speaker 1:But there are some other AIs that actually read it very well, and we trained it. Just teaching my son to do these things, even for him, you know, growing up with tech, he's learning to really think about how to prompt things. And being able to afford it, the monthly fees, and to utilize it, accelerates him past a lot of people who can't use it, who have to do things manually. That's the other question: where will the benefits go, and who will they go to?
Speaker 3:Yeah, absolutely. I mean, that would make a good podcast, actually, just how you're enhancing your abilities through AI. Maybe that could be our next one, so watch this space. There are so many other dystopian scenarios to look at as well. I don't know if we wanna wrap up on this one, but...
Speaker 3:You know, will it be our last invention? If there's something capable, again, of making better decisions, it can invent anything that we would need. Is this literally going to be the last, you know, potential discovery or theory? And are we just going to be observers from this point?
Speaker 3:Go. That's great. I like it. Yes, let's do that.
Speaker 3:And we really become someone that's just actioning these things. It's a sort of role shift. I don't know what you guys think.
Speaker 1:Yeah, no, that's a good point. Hopefully it'll accelerate to a certain point, and then we're continuously involving humans to work with it, and we don't stop innovating. When that happens, who knows?
Speaker 3:But is it the case, like, could you think of it as how we prompt AI now? You feel like you're sort of creating these things based on where you wanna go with an idea. It's still obviously pulling back a whole lot of aggregated data. But is that how we're going to be? Well, I'm very interested in extending how we travel to other solar systems. Therefore, I'm going to work with this AGI, it's going to create all these things, and then I take the credit for those new theories.
Speaker 3:Like, is it something that we're still going to be directing? I'm a scientist working on curing these types of cancers; I'm just gonna work with it until it's created all the things we need to solve that problem. Is that the new sort of direction? Or is it going to be: No, don't worry, I'm working on all of this. Don't worry, I'm busy computing, I'll have it computed in three weeks.
Speaker 3:I was already on it before you even asked.
Speaker 2:Yeah, it kind of comes back to, what is human creativity, too? Is human creativity just the objective of solving a problem or coming up with a solution to something, or is it something more? It does get really philosophical, because take someone writing a book, right? They write this entire world in their head, all these characters, these interactions, this whole story. Where did that come from? How did the person actually come up with that?
Speaker 2:How did they build this world in their head? It's not like they read some books about these worlds and, like an AI would, took bits and pieces and made their own interpretation. Obviously humans do that to an extent, but some books are so original that you sometimes wonder, how did this person even come up with these crazy ideas? That's the whole muse thing, right? Authors talk about their muse because they don't even know how they come up with these ideas. It just feels like it's coming out of them.
Speaker 2:They're not even sure how they did it. And I think it'll be very interesting to see, when AGIs come into play, is that just a function? Is it just processing, and our subconscious is doing all this processing that we're not aware of? Or is it something really unique about humans and our creativity, about the way we can come up with brand new things? What is it to be a creative, thinking person?
Speaker 2:And I think AGI will shed a lot of light onto what that actually means, and then we'll be able to see: does it have a muse? Can it come up with a new, unique concept, or is it just going to be a really smart person, the best researcher and the best teacher, but, you know, is it
Speaker 3:actually gonna be flat.
Speaker 2:Yeah, gonna be flat. And I think we just have to, if and when it comes about, see. It'll answer a lot of questions, fundamentally, about what it means to be human. Are we just computers? Are we just processing engines, biological processing engines?
Speaker 2:We think we're coming up with creative ideas, but really, when you unpack it all, there was a logical flow to it all.
Speaker 3:But I think you're onto something there. It'll really be the smartest brain, and it'll do everything in a way we would classify as perfect. It would not necessarily make mistakes, and it would do everything in the most accurate way. But maybe that's where we have our own cultural renaissance from being freed up. I was saying to my wife, Leanne, the other day, we were listening to a song, and I was like, Wow, this artist continues to just create. You think of the number of different songs out there, and you can still, to this day, create original beats and lyrics and thoughts.
Speaker 1:Were you listening to Taylor Swift?
Speaker 3:I'm sorry. That's...
Speaker 2:It's gotta be Taylor.
Speaker 3:I don't know what you guys are talking about, Taylor, but it wasn't, actually. But yeah, it's just fascinating, because then you also think: a lot of the things we celebrate, that we find beautiful or artistic, are imperfect. And that's what we sort of stand by as being human, being imperfect. So maybe this will take a side that is different and help us actually move forward into a space of being more human and exploring what it is to be human.
Speaker 1:Yeah.
Speaker 2:Yeah, exactly.
Speaker 1:I think, to end on that: again, when I was thinking of a story or a book to write, it's when I dream that I'm most creative. And we still don't know what our dreams are. Is that tapping into something else, a different world, a different section of our brain that is creative, that allows our brains to actually rest in a certain way we don't yet understand? Maybe that is the basis of creativity: tapping into those things while we sleep. And when you wake up, these dreams become reality, or you start to make them become reality in certain ways. And that's somehow the door to creativity that we can't program in yet.
Speaker 2:Yeah.
Speaker 1:So...
Speaker 2:I think it'd be pretty special if that's the outcome, that we do find, you know, there's something unique about being human, and it's not just that we're smart; there's more to an individual than just what they're capable of processing.
Speaker 1:Yeah. Not just ones and zeros; there's more to it. But yeah, I think this is a good place to end this podcast. Is AGI our savior or our doom?
Speaker 1:Or is it the ultimate tool? The clock is ticking. So everybody out there listening in, let us know what your thoughts are. Leave a comment below, and thanks for tuning in. Until next time, we'll see you soon.
Speaker 1:Thanks everybody.
Speaker 3:Catch you then.
Speaker 2:Thanks a lot. See you.