“Thinking Machines,” hosted by Daniel Reid Cahn, bridges the worlds of artificial intelligence and philosophy - aimed at technical audiences. Episodes explore how AI challenges our understanding of topics like consciousness, free will, and morality, featuring interviews with leading thinkers, AI leaders, founders, machine learning engineers, and philosophers. Daniel guides listeners through the complex landscape of artificial intelligence, questioning its impact on human knowledge, ethics, and the future.
We talk through the big questions that are bubbling through the AI community, covering topics like "Can AI be Creative?" and "Is the Turing Test outdated?", introduce new concepts to our vocabulary like "human washing," and only occasionally agree with each other.
Daniel is a machine learning engineer who misses his time as a philosopher at King's College London. Daniel is the cofounder and CEO of Slingshot AI, building the foundation model for psychology.
Hey there. I'm really excited for this one. I'm having Joel Lehman on, an AI researcher and friend. I wanna talk to Joel about his book, Why Greatness Cannot Be Planned, which he coauthored with Kenneth Stanley back in 2015. For some context, Joel and Kenneth have been doing research together in AI for the last 2 decades.
Speaker 1:They worked together as research scientists at Uber and OpenAI, and they wrote this book based on some insights from their ML research during an era of AI pretty different from the one today. The thesis of their book is basically that leaps of progress don't come from explicit plans. They happen by chasing what's interesting. I wanna ask Joel about the relevance of his book to his work in AI today and what the AI space may be getting wrong by pursuing things head on, like AGI, with too short time horizons and either too much or not enough ambition. Let's jump in.
Speaker 1:So, Joel, for those who haven't read your book, what is Why Greatness Cannot Be Planned about?
Speaker 2:Yeah. So the book is an argument that while a lot of our society and individual ideas are structured around really directed effort towards a goal, oftentimes the kinds of greatness that are most celebrated really are not generated through those means. And there are different ways of looking at this, but many of the things that we point to as greatness were never planned in the first place, and it'd be hard to imagine them as kind of a directed goal that you move towards. So, for example, the emergence of rock and roll: something that's a super important and awesome innovation in the space of music, but no one said, like, I want to create rock and roll. No one knew that there was, like, an explicit need for it.
Speaker 2:It had to be discovered implicitly. Another angle on this is that when we talk about innovation in the space of science or technology, the stepping stones to get to some place great, to get to a new creation, a lot of those stepping stones have to be laid by somebody else. When you're trying to achieve something yourself, you're relying upon things that other people have built, and those people often built them for reasons that are different than why you actually ended up creating what you created. To make that more concrete: for example, computers are dependent on technologies like transistors and, initially, things like vacuum tubes. And those things were not created with computation in mind, and yet it'd be hard to have scalable computation without them. Yeah.
Speaker 1:And I think you talked about, like, if you were living in ancient Greece, you probably would want the Internet, and maybe you could even formulate vaguely an idea of, like, instant communication between anyone in the world. You could formulate that part. But the part you couldn't formulate was, like, anything to do with vacuum tubes.
Speaker 2:Exactly. Yeah. So if you found yourself as, like, a cave person, and the fundamental technologies available to you are, like, fire and the wheel, it'd be crazy to just try to say, I'm gonna make a laptop now, because you're so far removed from the underlying stepping stones that would actually make that technology possible. And what you really should do is just do basic science for a while. Like, forget about the dream of the computer in the short term and just explore: okay, what can I do with fire?
Speaker 2:And learn about the world for maybe hundreds of years, thousands of years, before you could actually begin to target the thing you initially wanted to.
Speaker 1:And what's tough and unfortunate, I guess, is that if you were a cave person, you would have no way of knowing if you're a year off from having the Internet or a hundred thousand years. And it's very possible that there were cave people who set out and said, I'm gonna build the Internet, or I'm gonna build, you know, a time machine. And, unfortunately, the path they took was terrible, because it made no sense. So I'm curious. It's an interesting book to read as an ML engineer, and I know that you started it because you discovered certain elements of this pattern in your own machine learning work.
Speaker 1:But, like, today, fast forwarding a while from when you actually published the book, there are a lot of companies out there, a lot of efforts in AI that are very directed, right, where people say what we need is an AI doctor, AI lawyer, AI accountant. What are your thoughts on those very directed plans right now in AI?
Speaker 2:Right. Well, first, I also wanna acknowledge that this book, Why Greatness Cannot Be Planned, I wrote it in concert with Ken Stanley, and so it's a collaboration between me and him. And the original insight actually came from something that predated my involvement, which is him noticing something in a system called Picbreeder. And I did research with him afterwards on something called novelty search, which is an algorithm we worked on together that provided the seeds of the rest of the book. But to answer your question about directedness and machine learning, yeah, the challenge always is: am I within striking distance of this goal or not?
Speaker 2:And I think, like, the devil's in the details there when it comes to things like generative AI applications. Like, can I make an automated doctor or coder or whatever? And it probably depends on the scale of your ambitions. So there are some things that generative AI is currently good at, some things it's not good at, some things it will be good at shortly, and some things that maybe will be really hard for it to get good at in the short term, that might actually require fundamental advances. So one job of someone who's an innovator or a visionary is to know what actually is within striking distance.
Speaker 2:Like, when's the right time for the iPhone, when, for example, so many initial smartphones ultimately kind of failed? And I guess it's a hard question to answer in generality, but it really would require, yeah, some hypothesis on your part about what's within striking distance.
Speaker 1:So I guess if we're pattern matching, the other way to look at this is: first, there's, like, what's actually the end goal that's within striking distance? Because it would be amazing if you had the insight that something was actually right in front of us, within striking distance. No need to pursue basic research. We can actually just hit the target because it's there. But the other side is there are a lot of people in ambitious fields who are aware of this, know that we're not within striking distance, and are actively pursuing stepping stones.
Speaker 1:Right? So, you know, an example would be in, like, quantum computing. Like, no one thinks that we'll have a consumer-level quantum computer tomorrow, but a lot of people believe that they are working on the stepping stones. Is there anything paradoxical about working on a stepping stone?
Speaker 2:Certainly. Yeah. There is something paradoxical about it. Like, being involved in the field of AI research, where the ambition is quite grand, to create something that's as smart as us or smarter than us.
Speaker 2:At the same time, you recognize that historically, it's been quite far away. And so while there is something paradoxical about it, you can take that into account as you pursue things. And the insight would be that the path to getting to some place ambitious is to do really broad exploration. And that could mean that, actually, as an AI researcher, you should have interests outside of AI, because there's no guarantee that the stepping stones to the ideas that could actually make AI happen are going to come from within mainstream AI research.
Speaker 2:It could be that, actually, philosophical insights are useful, or advances in other fields. Like, one way of looking at this is that the modern deep learning phenomenon is based mostly on GPUs. Of course, people are creating other kinds of compute accelerators right now, but GPUs were created first for graphics and video games, mostly. And that's sort of a stepping stone that someone else laid that actually impinges on the kinds of architectures that people use. And I think there's someone named Sara Hooker who wrote a paper called "The Hardware Lottery," which highlights that a lot of the algorithms we use are kind of dependent upon this.
Speaker 2:And if we had a different technological paradigm, like some other kind of compute that someone creates for some other purpose, that actually could inform new kinds of architectures that right now aren't efficient or useful. So it is paradoxical. And at the same time, there are practical things that you can do to try to get around that paradox.
Speaker 1:I mean, I think, like, the interesting insight here, if you just look at specific stepping stones like the GPU, is that they're often not what you expect. And so maybe part of what's paradoxical is, like, I love the imagery you use in the book, and tell me if I'm getting this wrong. You're trying to cross a river, and there are, like, literal stepping stones in front of you. But it's super foggy, so it's really hard to see which stones are ahead of you. You can just take the most direct route.
Speaker 1:Just look to the other side. Go that way. You know which way to go. Right? That's easy.
Speaker 1:But you're almost certainly gonna fail, because if you just take the stepping stones toward the other side, there's a really good chance you'll just reach a point where there's no stone ahead of you, and then the only choice is to turn around. So if everyone starts going down that most direct route possible, chasing the obvious stepping stones, we're all gonna fail. Whereas if we kinda just branch out, there are a lot of other strategies, societally, that'll get us to the other side better than just pursuing the most direct route. Did I get that right, first of all?
Speaker 2:Yes. Right. Yeah. I think the phenomenon you're pointing to is something we might call deception, which is, like, you have some heuristic you're using to try to get closer to a goal. In machine learning, maybe that's, like, accuracy on a training set or something.
Speaker 2:In other fields, it'll be a different heuristic you're using for how close you think you are to achieving the goal. And it just turns out that oftentimes there are dead ends in that space. And you can just look to the history of machine learning for this: in the early days, you know, there were other paradigms of AI that were ascendant, that people thought were definitely on the path to, like, AGI. Things like symbolic reasoning, what we now call GOFAI, good old-fashioned AI, which is more symbolic in nature. And neural networks actually were quite denigrated, for different reasons at different points of the trajectory. There have been boom and bust cycles in AI. And it's just that the space of AI algorithms is itself deceptive.
Speaker 2:The heuristics we use to judge, you know, how close we are to, like, AGI could be misleading. And right now, it seems like neural networks are again ascendant. And it could be that, you know, actually, we are now within striking distance of AGI, or it could be that new fundamental advances, new algorithms, or potentially new hardware that could accelerate algorithms in different ways are needed to get over that finish line, but it's really quite unclear. That's kind of always the situation in research generally. You just don't know what's gonna lead to what.
Speaker 1:That's, like, the fogginess. But we do know, or we can guess, that more diverse approaches are more likely to collectively get us to the other side than chasing the most obvious path possible. And I guess, is that part of the thesis?
Speaker 2:Yes. Right. Like, there are signals beyond heuristics of progress that we can use. Sometimes we can understand that things are likely to be deceptive, because we have a sense that, you know, if a goal is sufficiently ambitious, probably our intuitions about what's close are gonna mislead us. Like, if I was trying to design a time machine, first of all, we don't know if that's physically possible.
Speaker 2:And secondly, I have no idea what the stepping stones would be that would lead to that, if it is possible. And so what other heuristics could we use? In some cases, heuristics of, like, just interestingness or novelty. Knowing that something is different than something else is reliable, in a way that knowing how far you are from a solution can be unreliable. So, like, if you're going through a maze, for example, if you hit a dead end, trying to go a different direction is a good strategy.
Speaker 2:And the devil will be in the details about, like, what makes something interestingly different, what makes for a good stepping stone. And we could also talk more about that.
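[A minimal sketch of novelty search, for the technically curious. This is not code from the book or their papers; the grid world, the behavior descriptor, and every parameter below are invented for illustration. The one essential move is that selection rewards distance from previously seen behaviors rather than progress toward any goal.]

```python
import random

def novelty(behavior, archive, k=5):
    """Novelty = mean Manhattan distance to the k nearest behaviors seen so far."""
    if not archive:
        return float("inf")
    dists = sorted(abs(behavior[0] - b[0]) + abs(behavior[1] - b[1]) for b in archive)
    return sum(dists[:k]) / min(k, len(dists))

def rollout(steps=20, size=10):
    """One candidate: a random walk on a grid. Its final cell is its 'behavior'."""
    x = y = 0
    for _ in range(steps):
        dx, dy = random.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
        x = min(max(x + dx, 0), size - 1)
        y = min(max(y + dy, 0), size - 1)
    return (x, y)

archive = []  # behaviors already reached
for generation in range(100):
    candidates = [rollout() for _ in range(50)]
    # Select for being somewhere new, not for being near a goal cell. A
    # deceptive goal hidden behind dead ends can't trap this search,
    # because nothing here is steering toward it.
    best = max(candidates, key=lambda b: novelty(b, archive))
    archive.append(best)
```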
Speaker 1:But novelty would at least lead to diversity and, societally, success; the more paths we take. I guess what's relevant to AI here is, like, GPUs were a good example: if you were focusing on algorithms, you would never find GPUs. And now, today, I mean, the paths to AGI could be analog computers. Right? Like, it could be coming from materials science or physics or chemistry or biology, or, like, completely divergent things from the obvious things to work on.
Speaker 1:And so something about, like, the paradox of working on stepping stones, I think, is that if you're chasing the next stepping stone, the obvious next one on the path, you say, like, the path to GPT-7 is GPT-6 and 5 and 4. Then we're at 4. Next is 5, then 6, then 7. That's how we get there. But what you just did is you planned out where we're gonna go.
Speaker 1:You planned out the stepping stones on the way there. You chose the most obvious ones. Everyone else chose the exact same ones for obvious reasons. And it's possible that you're right. It's possible it works.
Speaker 1:But there's a really good chance that if everyone pursues that one idea, we're gonna fail. Whereas if someone else just says, actually, what we need is, like, sublinear models or something, like, who knows if that works? But at least it's novel. It's interesting. It's different.
Speaker 1:It's definitely, like, we can all intuitively agree that it's more interesting to work on, like, some new algorithm for attention than to work on a larger GPT-5. Right? At least more interesting, not that it'll work or be in the right direction. You're hesitant. Yeah.
Speaker 2:I think the tension is that it's hard to read the future, and it could be that, you know, scaling in terms of data and compute is gonna get us across. Like, GPT-7 could be amazing just from doing that. But there'd be multiple reasons to diversify.
Speaker 2:One would be that it's one way to escape competition. If you're gonna do something that's different than somebody else, you're making a different bet. And it could be that through that different bet, you succeed where somebody else fails, and you're rewarded for that, which is nice. And, basically, societally, we try to encourage these kinds of bets for that reason. That's why, you know, we do basic science.
Speaker 2:That's why investors probably do look sometimes for, you know, what's your differential advantage? What's your way of kind of avoiding competition? That might fail, and maybe a lot of these diverse efforts will fail. But, societally, we benefit, because some of those paths may win the lottery ticket of being a really good stepping stone that no one foresaw.
Speaker 1:Yeah. It's an interesting one to talk about avoiding competition, just because I think, like, it's definitely a nice strategy, definitely societally successful. But I gotta wonder. I gotta push on this and get your pushback. If I am trying to build a big company right now, isn't it a good bet to make that we are within striking distance, and then to pursue the obvious bet on the off chance I'm right? As opposed to: if I believe that we are stepping stones away, and therefore I take a divergent point of view where I'm avoiding competition, I'm creating a whole new approach that no one else is taking.
Speaker 1:You know, that must mean I'm, like, less likely to succeed. Or even if I'm right and I get there and I get over that stepping stone, all I've reached is another stepping stone, where I still have to get within striking distance of some goal, which means that by the time we get to that key point, there might be competition where I'm at. Or is, you know, the conclusion, like, go be super secretive like OpenAI, and that's the only way to make progress? What do you think? What's the argument against competition being good?
Speaker 2:Well, I think competition is good. It's a good force in general for honing things. And, also, there's a way to kinda get beyond the binary of either you're pursuing a stepping stone or you're in a competitive market. And, I guess, even if you think you're within striking distance, there are multiple stances you can take to differentiate yourself and to make bets that other people aren't making in more narrow cases.
Speaker 2:So it could be like, okay, I'm betting that current generative AI is gonna be good enough for what I'm doing. But how am I gonna differentiate from everyone else? What's my differential bet here? And so one way of thinking about this is in the space of generative AI; let's take that as an example.
Speaker 2:When you have a new medium, oftentimes the first take at creating with that new medium is going to mirror an old medium, the analogy being that the first motion pictures were just taped stage plays. But stage plays and movies have different affordances, and movies have become an art form of their own that has diverged significantly from stage plays. And so, similarly, if generative AI is kind of remaking something that existed in the past, there are gonna be new affordances, new ways of reimagining how you make a doctor or a coder or something. And maybe the way that you can still explore novel stepping stones and avoid competition, in some cases, is just taking a different stance and trying to find a new affordance, a new way of doing it that someone else won't do. And that way, you can still, you know, make some space.
Speaker 2:So exploration doesn't have to be a binary where either you're doing it or you're not; you can do it in different ways where, within your tolerance for risk, it makes sense to do that. In other words, exploration can be very risky. If you just put all your bets into basic research, there's no guarantee that, within your time horizon, whatever your time horizon is as a company, you'll strike gold. And so you wanna mitigate that risk by doing things that have some chance of success, maybe more chance of success, things you think you can get to reliably, but that maybe still are gonna be different, or result in a product that somebody else wouldn't create.
Speaker 1:That makes sense. So I'd love to get concrete here, because I think, like, when we're talking about explore and exploit, part of what I'm hearing is you can still do both, and you can also exploit and find explorative strategies for exploiting. You could still try to find a way to, you know, use the technology and still explore. It doesn't mean that you do extremely basic research. It doesn't mean you go back to the drawing board and say, screw neural nets.
Speaker 1:Let's try something totally different. You can do that. That's a strategy. But there are also a lot of paths, and even just divergence in where on the scale you sit still counts as diversity and interestingness. Yeah.
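[The explore/exploit tradeoff named here has a standard textbook formalization, the multi-armed bandit. Below is a minimal epsilon-greedy sketch; the three "arms" and their payoff probabilities are invented for illustration, not drawn from the conversation.]

```python
import random

# Hypothetical bets a company might allocate effort across (made-up odds).
arms = {"scale_existing_model": 0.50,
        "new_attention_variant": 0.10,
        "analog_hardware_bet": 0.05}

counts = {a: 0 for a in arms}
values = {a: 0.0 for a in arms}   # running mean payoff per arm
epsilon = 0.2                     # fraction of tries spent exploring

for t in range(1000):
    if random.random() < epsilon:
        arm = random.choice(list(arms))       # explore: try anything
    else:
        arm = max(values, key=values.get)     # exploit: current best bet
    reward = 1.0 if random.random() < arms[arm] else 0.0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
```

[Epsilon is exactly the dial the conversation keeps circling: zero is pure exploitation of the obvious stepping stone, one is pure exploration.]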
Speaker 1:One big fork in the road for AI companies is basically model training versus application building. So there's the question of what I, as an AI company, should pursue. And to some extent, this is a huge divergence in, like, basic versus applied research. I'm like, do I wanna be training my own models? It's gonna cost me millions of dollars. I need to collect a dataset, hire people.
Speaker 1:Then, almost certainly, I'm gonna have to do something novel, because there just aren't that many trained models, so there's no, you know, playbook I follow. Versus application building, where you're betting, basically, that the fundamentals are either already done or almost done. Like, we are at a later point in the stepping stone process. We're within striking distance of an application.
Speaker 1:I'm gonna try to build the application. What I wonder is, do you think the theory that greatness cannot be planned pushes you towards, you know, more basic research? Would it push you towards, like, doing more model training? Or is it kind of just, both are options and, you know, diversity is good?
Speaker 2:I think it's hard to answer that question in generality. I'm not trying to be evasive, but, yeah, there are some cases where model training is really interesting and exciting and important for your domain. Especially if there are things you think will be hard to do through, like, API access. Like, if you need to deal with the internal representation of the network in some way, if you could leverage that somehow, or there are more exotic things you wanna do. Some things from the literature might require you to have access to model weights and to train them yourselves, versus, kind of, you know, API access from one of these companies, like OpenAI or Anthropic or something. There are so many, for example, prompting strategies you could use for, you know, off-the-shelf models. There's a lot of exploration to be done on all fronts, and it really would maybe depend on your application. So I feel like I'm not saying much here, which is probably not what you want.
Speaker 1:But I'm trying to force you to put some teeth on the argument. And I have to think, at the very least, there's something about, like, if you imagine the state space of all possible strategies here. If you're imagining, like, visually, the stepping stones while you're trying to get to the other side, you can draw in two-dimensional space where people are standing, and just be like: a lot of people are standing right here, and no one's standing over there. And then it just seems to me, like, if you are training a model, at the very least, there's a massive space of things you can do, a massive space of directions you can go. What data do you use? How long do you train for? What hyperparameters do you train with? How much do you try to overfit or underfit?
Speaker 1:What size model? You know, and how you run inference. There's a lot you can do. What architectures do you use? Do you try something novel? Versus the space of, like, I have OpenAI's API.
Speaker 1:I can pass it data. And it's just a massively smaller space. Like, doesn't that itself, shouldn't the size of the state space have some impact, you know, according to this theory?
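[Daniel's state-space point can be made with back-of-envelope arithmetic. The option counts below are invented placeholders; the only point is how differently the two design spaces multiply out.]

```python
from math import prod

# Rough, made-up counts of the choices available in each regime.
train_your_own = {"architecture": 20, "model_size": 10, "dataset_mix": 100,
                  "training_steps": 10, "lr_schedule": 10, "objective": 5}
api_only = {"model_choice": 5, "prompt_template": 50, "temperature": 10}

print(prod(train_your_own.values()))  # 10,000,000 rough configurations
print(prod(api_only.values()))        # 2,500
```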
Speaker 2:Yeah. It's interesting how you're posing it. I think, again, maybe to retreat to abstractions: I think the job of, you know, someone who is, like, a visionary is to notice when stepping stones are kinda snapping into place, and something that wasn't previously possible is now possible. So, like, when I think of entrepreneurship, I think of, you know, you're trying to make a profitable company that is going to provide something that people are willing to pay for.
Speaker 2:And maybe some of the stepping stones you needed to create this new product snapped into place with the API. And now you can do what it is you need to do, and there's no need to train new models, except maybe for economic reasons. Maybe it becomes too expensive to use the API and you need to train models yourselves. Or it could be that, actually, you notice there are stepping stones snapping into place on the training side. To do the thing you need to do, there's this novel capability, and you see that the larger state space of training models actually is really critical, and you have an insight as to why you would be the right person to snap those stepping stones into place for your application, so that you get the thing that you wanna get.
Speaker 2:So I think it really depends. It also depends on your expertise. Like, if you're someone who comes with a lot of machine learning expertise, or your company has that kind of expertise, then diving into the larger state space of training models makes a lot more sense than if your team doesn't have that expertise. And I've just been saying "and, and" this whole time. And sometimes it's actually good to be naive.
Speaker 2:And so even if you didn't have expertise, even playing around could sometimes be kind of useful, because you're gonna be questioning received wisdom. So, again, maybe I'm just covering all the bases and not saying that much. Yeah.
Speaker 1:I mean, I want you to put a little more bite into it, but I see how it is. I feel like the whole point is what you're acknowledging. What I'm hearing is, first, like, the why now, and second, like, the why us kind of thing. Like, on the why now side, the theory does seem to lead to the conclusion that you need to have a theory about what stepping stones we just passed. Because, like, if you want to be doing everything yourself, you're gonna fail.
Speaker 1:Like, you just can't. We're talking about standing on the shoulders of giants, of humanity. But if you have unique insight about why now, because you believe that training just reached a certain point, or inference just reached a certain point, or whatever it is, then you have a unique advantage, or a unique reason to pursue this that might be different than other people's, because you believe something is within striking distance. And that unique belief should push you, but it's basically coming from a reflection about what stepping stones you think just snapped into place, as you put it, I think. And then the why us thing leads equally to divergence and interestingness, just because if you believe that you are well placed to work on something, that might mean that you are better placed than someone else.
Speaker 1:And therefore, you're sort of avoiding competition by chasing what you're good at, which might not be what everyone's good at. Am I getting that right?
Speaker 2:Yeah. That's right. And it's also, like, your unique sense of what is interesting, and being, you could say, receptive. There's, like, a quote that scientific progress often results not from "Eureka!" but from, "oh, that's interesting." Like, there's something happening here that I didn't expect, that's interesting to me.
Speaker 2:Like, the history of startups is often characterized by pivots, where you have an initial hypothesis and you're pursuing something. And at some point, maybe that is kind of working, but you're receptive to, like, wow, this thing that I didn't think was going to be central to my business, it's kind of snapped into place that it could be. This is actually what people are responding to. And you could either be stubborn and kinda go the direction you were initially gonna go, or you could be receptive to that.
Speaker 2:Actually, there's something else; I'm gonna kind of go this other direction. And there are many examples within science where people who made pretty profound discoveries did it because an experiment they intended for one purpose actually revealed a different result. So I think, like, one example is penicillin, where a culture plate, I believe, got kind of contaminated with a mold, and the mold actually had these antibacterial properties. And only by being interested in that, you know, did something happen.
Speaker 2:I think the discovery of LSD was somewhat similar.
Speaker 1:I mean, but those all seem to push on the, like, non-intentionality front. The, like, you know, if you have enough plates in a lab, eventually one will get contaminated. But you might need a ton of plates in the lab that all might have to be really different and in different conditions. And if you're just chasing one experiment... I don't know if you've read Originals by Adam Grant. I don't wanna overly worship it, but he has this really interesting notion. He writes about kind of originals, as in people like Albert Einstein, people who came up with novel ideas.
Speaker 1:And he makes the claim that there are, like, two ways to be an original. One is someone like Einstein, who basically just had a bunch of received wisdom and challenged some of it. Basically, challenged the orthodoxy, didn't really do many experiments, just saw what experiments had already been done, and then came up with a new theory to fit the evidence. And his kind of claim is, like, there are a ton of people who are successful in their twenties, especially, who have this pattern of, like, they didn't really do anything themselves.
Speaker 1:All they did was read. Here's some established wisdom; think that there's a different pattern than what other people think; challenge the orthodoxy. And that leads to some massive result. And then the other path is people later in life, who generally succeed much later, like in their fifties, sixties, who are more likely to be, like, Nobel Prize winners, and who basically did research that failed repeatedly, over and over and over again, but in interesting ways.
Speaker 1:And just by pursuing this, like, gradient of interestingness, they ultimately succeed. So I do wonder, and I don't know if you buy this. The interesting anecdote here is, like, Einstein succeeded in his twenties, and then, in his own view, he, like, failed the rest of his life. This is just, whether or not you buy it. Later in life, quantum mechanics becomes a thing, and he's like, no.
Speaker 1:I don't buy it. It violates the orthodoxy. And so the joke is kinda that he, as a young person, was able to receive the orthodoxy and challenge it. Later in life, he becomes the orthodoxy, and then he laments it about himself. And he's like, damn it.
Speaker 1:I wish I could have still challenged the orthodoxy. But he did not become this experimental physicist who just did a bunch of interesting experiments and found something. Whereas other people, Nobel Prize winners, just do new experiments every year that follow a trajectory. What I find interesting here is, like, is there a path for both? Like, is there actually legitimacy to people going after the first idea, of just having an opinion and saying different stepping stones came into place than you think?
Speaker 1:You're wrong in your interpretation of the world. And then a very different strategy, which specifically requires a very long time horizon where you say, like, I'm just gonna try a lot of things. Do you think that there's actually this dichotomy? Or, like, how do you think about this? Yeah.
Speaker 2:I hadn't thought about that dichotomy in particular, but there are lots of different strategies that you can use to try to be innovative, to follow stepping stones. And there's, like, a Pasteur quote or something like that: chance favors the prepared mind. So it doesn't have to be that you're just wildly doing crazy stuff with no thought. I think the example we use in the book is, like, sticking peanut butter in the microwave and just, you know, expecting to discover the secret of antigravity or something. Like, that doesn't seem to work.
Speaker 2:And also, the very quality that might make you an innovator could also become a liability. Like, contrarianism can be really useful to challenge received wisdom, and you can also be wrong about being contrarian. The received wisdom, you know, is often, sometimes, right. So it's, like, delicate. There are no, I think, universal rules here, but there are lots of examples of scientists who innovated across multiple domains.
Speaker 2:It's clear that they're developing intuitions and maybe are receptive to the things that are strange, like, where the received wisdom seems to have potential holes in it. And it's hard to give universal prescriptions, but certainly being aware of these strategies is useful, and you can, I think, attune more and more to what is actually interesting or different.
Speaker 1:Yeah. I think on the, like, why us side, there's some interesting perspective to be had, where you actually do decide to some extent. Like, am I challenging an assumption about the world and trying to work off of someone else's data? Like, am I reading books and coming to a different conclusion? Or am I prepared to set off on a quest that's not gonna take me a year?
Speaker 1:It's gonna take me, like, 30 years, and I have no idea what's gonna succeed along that 30-year journey. Both are legitimate strategies. But, since we're both young people, you know, talking about philosophies, we have to lament the kids these days. I wonder if you think that there are too few people pursuing the latter strategy, though, which is, like, having a long time horizon and being ready to do experiments that are gonna fail?
Speaker 2:It's tough. I think it is the case that we overinvest in the short time horizon. That's one of the reasons we wrote the book: basically to highlight that there's increasing pressure in society, I think, from all angles, from the way that we're brought up in school, which is often just super competitive these days. And I think I was going through school during the rise of that kind of competitive thing in the US, but I think it's gotten a lot worse, and we just do ourselves a disservice by having this kind of pressure, which actually doesn't really work, even though it's, like, societally a useful heuristic, I guess. In that, you know, people with really strong credentials, that's one way of societally assigning, like, who is promising or something.
Speaker 2:And yet a lot of that education is about solving really particular known problems, which is different than when you hit the real world, which is, like, how do you solve unknown unknowns, and how do you discover things, and how do you innovate? And there's not really a rubric for success. So I do think that it's gotten worse. Like, even in science, science funding is often very objective-driven, where often you have to say what you think is going to happen in the experiments to get funding.
Speaker 1:Which sort of defeats the purpose of the experiment a little bit.
Speaker 2:Yeah. Definitely. At least in the realm of basic science, it definitely seems to be a little bit at odds with how, you know, great discoveries actually happen. And there's some tension there. Again, you still want to say that you think something interesting is gonna happen through an experiment or through an approach, but you may not know exactly how it's gonna turn out.
Speaker 2:But the incentives are all basically that you have to get all the reviewers of your grant proposal to say that this is good or something. And so one thing we highlight in the book is that maybe what you actually want is something that splits the experts, something where you get strong yeses and strong nos, which means there's something new here. We're not sure how it's gonna turn out. Let's fund that, as opposed to something that gets lukewarm yeses from everyone. Like, yeah, this is probably gonna work.
Speaker 2:It's probably some kind of incremental progress. So that makes a lot of sense in basic science, that you'd wanna change that. And maybe, similarly, you want that in entrepreneurship, where, as a society, what we want is a portfolio of diverse bets, of companies, some of which could change the world and many of which will fail.
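[Joel's "split the experts" heuristic is easy to make concrete: rank proposals by reviewer disagreement rather than by mean score. The proposals and scores below are invented for illustration.]

```python
from statistics import mean, pstdev

proposals = {
    "incremental_benchmark_bump": [6, 6, 7, 6, 6],   # lukewarm consensus
    "weird_new_paradigm":         [9, 2, 9, 1, 8],   # strong yeses and strong nos
    "solid_followup_study":       [7, 7, 6, 7, 7],
}

for name, scores in proposals.items():
    # Consensus rule: fund the highest mean. Split-the-experts rule: fund
    # the proposal the reviewers disagree about the most.
    print(f"{name:28s} mean={mean(scores):.1f} disagreement={pstdev(scores):.1f}")

# Under the mean, weird_new_paradigm (5.8) loses to both safer proposals
# (6.2 and 6.8); under disagreement (3.5 vs 0.4), it wins by a wide margin.
```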
Speaker 1:Yeah. And I do think, like, what's really interesting is, in the startup world, you know, classically, you have a slide deck that starts with the problem, and you pitch the problem, not the solution. But then there's something interesting about all the successful companies in the world that didn't start with the problem or the solution. They just started with something interesting and kind of said, the problem statement is so massive that we don't really need to talk about it. Like, you know, the PC wasn't created to solve a problem.
Speaker 1:It was created to solve, like, a ton of problems. And, you know, you're pitching it, and you're like, look, what's it gonna solve? Any of a billion things. But, realistically, our go-to-market strategy is gonna involve seeing who's willing to buy it now, and then seeing who's willing to buy it next, and seeing what capabilities emerge that we can sell after that. Which is a very different strategy than saying the problem that the computer sets out to solve is spreadsheets or something.
Speaker 1:It's not even a problem statement. That's already a solution. I've started to notice this. I don't know if you have this opinion, but the best VCs are the ones who are least inclined to care about the problem slide in, like, a slide deck. And I think it's just because you are chasing such a massive problem that's so obvious that it's more about, like, what interesting approach are you taking.
Speaker 1:Obviously, OpenAI is the epitome, but there are probably others. Like, can you really say it's those startups with a slide deck that noticed there's this teeny tiny problem that no one else in the world has ever discovered, and we're gonna chase that one? Like, those companies might succeed, but they're just not gonna succeed very interestingly. Right? Yeah.
Speaker 2:It seems there are lots of different strategies people can use to, you know, form successful companies, and for VCs to identify those things. And, you know, maybe VCs are often looking for the outsized payoffs, and they're okay with a lot of things that fail in order to get those outsized payoffs. And it makes sense then, yeah, that you would wanna fund companies maybe based on, you know, the founders themselves. Like, these people have a track record of finding interesting things and doing interesting things, having unique perspectives, maybe having successfully founded companies before. And there are different heuristics you could use.
Speaker 1:I mean, so one big heuristic these days: there are, like, 10 foundation model companies that do exactly the same thing. Right? I'm talking about OpenAI, Anthropic, Cohere, xAI now, Inflection, Character. I mean, there are just a bunch where the pitch is just, like, well, you didn't invest in Anthropic, so you should invest in us. And I do wonder, like, is there something weird?
Speaker 1:Obviously, no market's super efficient, but, like, in venture, are investors just being dumb and getting more and more convergent? Like, have we actually lost the energy of investors saying, I wanna invest in, you know, a contrarian point of view?
Speaker 2:Yeah. I don't have my finger exactly on the pulse of the whole investment landscape. But I think it's human nature in some sense to pile on, and you see this everywhere, so I think VC wouldn't be unique in this. Like, within the research world, there's been just an incredible convergence to transformers and large models in general, and, you know, fewer resources towards whatever could seed the next paradigm. So I think it's human nature: something new and exciting has come up, and you're just gonna kind of think this is the end-all be-all.
Speaker 2:But I imagine some VCs still are going to be trying to take outside bets. It's a risky endeavor sometimes, but often, yeah, it could pay off a lot. And it's true, if this has become a really competitive market where a lot of companies are more or less doing the same things and it's hard to differentiate, yeah, it could be a bad play.
Speaker 1:So let's talk about time horizons, because I think that's, like, one of the key takeaways here. Basically, how do we think about time horizons? Time horizons have definitely gotten short culturally, in Silicon Valley at least, where, you know, we tend to believe that we're somewhere around 2 to 20 years away from AGI. You know, we're all gonna lose our jobs. There's gonna be no more human work. No more life as we know it.
Speaker 1:And if nothing else, that creates some sense that time horizons have to be crazy short. If you wanna start a company and you want it to succeed, it needs to succeed in the next 2 years, so that you can make your money before the world ends. Are we making a mistake by adopting such short time horizons? Or am I overly portraying this as a caricature?
Speaker 2:It's certainly, like, a Silicon Valley kind of mindset. It's not universal, even among Silicon Valley, but there is this sense that every day there's a new model release and incredible new capabilities that generative AI is developing. Like Sora with videos, and Anthropic's Claude 3, you know, kind of getting beyond GPT-4. And what was it, the automated software engineer that came out recently? I can't remember the name of it. Yeah.
Speaker 2:And so, I mean, there is this sense of, okay, this train is going pretty quick. And if it keeps going at this speed, things will get, you know, pretty weird. At the same time, I guess, it's just so hard to know when you're about to hit a brick wall in this space, and we may or may not. So I guess my sense, you know, just my internal view, which I would actually temper if I was trying to think about where things are actually likely to go.
Speaker 2:But my internal view would be, I imagine this will peter out at some point, and we'll need new innovations and new things to go forward. It's really hard to tell. That's just the nature of science, I guess. There's a story of someone who was listening to a lecture that the father of, I think, nuclear physics was giving at the time, saying it's crazy to think we'll ever, like, harness the power of the atom.
Speaker 2:And they listened to that lecture and got kinda pissed off. And that night, they went on a long walk and, like, you know, came up with, okay, actually, you can do it. So even the, you know, established experts can be incredibly wrong about what actually is possible next. AGI is scary to me. I mean, it's just kind of a weird thing that, as a society, we might just kind of, like, step into.
Speaker 2:But I guess, like, even if you were imagining kinda short timelines, that we have, like, 2 years or something, or it's possible we have 2 years: well, at some point, you just have to, like, live your life. And if that was happening, you could really let that deeply impact the trajectory you're taking. Like, okay, then I need to, you know, move quick with my company.
Speaker 2:Like, AGI is coming. Then, after it arrives, everything will be different anyways. And will the money you made from a company actually be important in that world? Like, I don't know. Probably, I guess, maybe.
Speaker 2:It's just, I guess, like, a precipice beyond which it's hard to reason.
Speaker 1:Yeah. Singularity, hence the name. Right? The point beyond which you can't reason. But I also do tend to think, I wonder if it's just AGI or something cultural.
Speaker 1:Like, because I think, you know, we have the same feeling as young people thinking about climate change, that the world's gonna end very soon. And I don't know. I have to wonder if that's just, like, an old or cyclical thing. Like, this can't be the first or second or tenth time in history that we've all believed that the world's coming to an end very soon, and that you have to act right now or else you're gonna run out of time, and you have to do it by the time you turn 20, and you have to, you know, start a company worth, you know... I wonder if, for mental health, if nothing else, as well as perhaps innovation, success, and whatever, we all kind of just were able to prime ourselves to think of a larger time horizon, and basically think: yes, all these capabilities are possible. The part I can't be certain about is the time horizon.
Speaker 1:I mean, do you have any tips, any thoughts on, like, what primes people to think in longer time horizons than, like, a month or a year?
Speaker 2:Yeah. I do think a lot of it comes down to the kinds of cultural, societal pressures that we put upon people these days, especially in certain environments. I imagine, like, Silicon Valley, New York City: lots of pressures. And there's something really strange about the modern world, I think more and more so, which I think we don't often take into account, which is that we didn't evolve for this kind of world, where we're getting blasted by cultural messages that are globally broadcast. You're seeing, you know, the most successful people in the world, like, their achievements are kind of in your face all the time.
Speaker 2:And we are creatures that quickly imbibe things at a subtle level, where I might start to think, you know, I'm awful unless I'm also an Elon Musk type or something. Like, if I don't have a $1,000,000,000 company, then I must be a loser or something. Which is crazy, because there's only so much money in the world, and only so many companies that will be that successful. And I think even Elon Musk has said, like, you don't wanna be me. My life is actually pretty, you know, stressful and crazy and not super fun. But we've started to judge ourselves by these kinds of crazy benchmarks.
Speaker 2:Like, one way of thinking about it is that, at some point in time, your first crush that you had on somebody was likely, you know, somebody in your immediate neighborhood, you know, someone you actually potentially could marry. And I think this comes from the book Supernormal Stimuli, which I really like. But at some point in history, that changed. Like, actually, how many people had crushes on the Beatles, for example? And so it's this kind of crazy maladaptation, where you're just imbibing something.
Speaker 2:Like, now the most attractive people in the world, I guess, you know, these musicians who are hugely talented, are having an outsized share of our attention, and, again, we didn't evolve for that. So now all these crushes are, like, this misdirected energy. Like, what's the point of a crush if you can't actually, you know, get with that person? It's not really serving any useful role. And similarly, when the lives of the rich and the famous get broadcast to us, it just doesn't play well with our kind of relative sense of status.
Speaker 2:And now I'm always bad, because I'm not, you know, a Nobel Prize winner. I don't have, like, $20,000,000. I'm not, you know, the most ascendant genius in the world. And so I guess it's just about kinda making peace with that, like, understanding that that's the situation we're in. It's really hard to not take in those cultural messages. But I think if you can reflect on it and try at least to divorce yourself a little bit from it, like, spend a little bit of time away from social media and your phones and stuff, and at least intellectually try to grasp that this is not healthy.
Speaker 2:I think that can help to lengthen your time horizons because you feel less of this pressure to become successful the next day, which is just not even how success works.
Speaker 1:Yeah. I love, by the way, I love that imagery of, kind of, like, you know, when you first have a crush, you know, like, 20 people. Like, you know 20 people in the world or something. And then, yeah, suddenly crushes move to this kind of thing where everyone has a crush on these 20 big celebrities. And, yeah, it's interesting. What I wonder more generally, and maybe AGI can make a big difference here.
Speaker 1:Like, there are some games where, like, you know, chess has now effectively been solved by AI. Like, AI is now the best at chess or whatever else. So, can that actually unlock an opportunity where you no longer need to be the best chess player in the world, because you'll never be as good as an AI? And as a result, you can be the best chess player on your square block, and a lot more people can have that win. Because, like, I don't think it feels any better or worse to be the best chess player in the world versus the best one on your square block, if you're not comparing globally. And if so, you know, you also just suddenly have a lot longer a time horizon, because you don't feel like people are gonna trample all over you, because you can actually see all the people you're competing with.
Speaker 2:Yeah. I think that relates to the idea of finding a niche or escaping competition, you know, finding some local place where you can get a sense of progress, for sure. And there is, I think, a real tension, though.
Speaker 1:I was just thinking, on the competition side, this is where it comes together. Like, if you imagine pre-COVID, you have, like, a science fair. I had a science fair in my school. I won my 7th grade science fair. I loved my partner for it.
Speaker 1:I had a lot of fun. I never really thought about competing beyond that scale. I think, you know, there were, like, a hundred people, or 50 people, competing. It was not a large number, maybe less because we were in teams, maybe, like, 20 groups. But the point was, it felt like I was competing against the whole world. You know?
Speaker 1:And what I wonder is, do those science fair type situations actually give more of an opportunity for diversity? Because every single one of our projects was different. There were no two people who worked on a similar project, because there were so few groups participating. Suddenly, post-COVID, you know, you're in 7th grade, and you're competing on a Zoom against a hundred thousand other people in, like, the International Science Fair or something. And you're this one anonymous voice, and you get categorized as, like, oh, you're one of the 832 people who built another Van de Graaff generator or something.
Speaker 1:That's what my brother built for his science fair. I wonder if there's actually something positive on the discovery side of having slightly less global competition. Where, like, sure, you have the replication. But if you had groups of a hundred scientists and you only talked to those hundred, would we actually be better off? Where you have a huge amount of diversity and novelty among those hundred, even if there's more repetition globally, compared to where everyone is trying to be novel. And part of it is, like, the feeling of failure. Like, if I tried to be novel in a science fair in 7th grade against every scientist, I would definitely fail.
Speaker 1:I definitely could not come up with a novel idea that no one else came up with. I would give up, and I would probably move on to another objective, like just trying to win instead of being novel. But if I have a hundred people, I could definitely be novel. I don't know. How do you think about that?
Speaker 1:Is it better to be novel locally than globally, or is that just a really dumb thought?
Speaker 2:I don't think it's a dumb thought. I guess there are a couple different factors. Like, one is the psychological factor. Right? So if you know you're competing against a million people, there's the pressure of knowing that to win that competition, you're gonna have to do something, like, crazy.
Speaker 2:That pressure can be self-defeating, like you're saying. Like, it's hard to force yourself to have the biggest, best idea of all. And so I do think that, yeah, in that case, having more local competition would allow you to let go of some of that psychological pressure in order to actually explore. And it could be the choice of emphasis also, where you can imagine creating different kinds of contests. Like, even if it was a million people in this contest, you could say, actually, we're also gonna make a hundred thousand awards, where awards are for different kinds of combinations of things, not just for the most impressive thing. It'd be hard to think of those hundred thousand categories, but you're trying to elevate different parts of the space, different interesting things that could emerge from it.
Speaker 2:And that would have a different effect, as opposed to everyone trying to be the best, which has a really...
Speaker 1:It's just so hard to compete globally, though, even if you have a hundred thousand categories. Like, I just think back to when I was a kid. I came up with all these novel ideas, or so I thought, because I didn't have the ability at the time to Google stuff. So I didn't know. If you took any novel idea I came up with and Googled it, you'd 100% find that someone else had already come up with it. But if you didn't Google it, it felt like it was novel, and then you kind of want to pursue that feeling more, and you want to do more novel stuff. What I wonder now is, like, there's something interesting here.
Speaker 1:Like, is there something culturally toxic to creativity, to novelty, from the feeling that you can't be novel? Because it's so hard to be novel, because when you pursue absolute novelty across too many people, you fail, compared to pursuing a little bit more local novelty. Like, it's okay to ignore some evidence. It's okay. Like, you know, you start a company, and you're like, I'm the only person who's ever worked on this. Then you go to the world-leading expert in that particular field, and they're like, there are 10 companies with the exact same pitch.
Speaker 1:You're like, alright. Well, fuck. You know? Do I do it or not? And I do wonder if some of those companies would be better off if the founder just either didn't know that, or just said, like, okay.
Speaker 1:Well, you know what? It's 10 companies. That's not that many. Still counts as novel to me. You know?
Speaker 2:Yeah. I think there's a couple of different lenses there. Like, one is, like, when we're learning to be creative, it makes a whole lot of sense not to care about the global situation because, like, you're learning to create things. Even if someone else has made it, you're exercising that skill within yourself. And you couldn't expect, you know, if I'm, like, 10 years old, that I'm necessarily able to create something that's, like, globally creative.
Speaker 2:But if it's creative to me and I'm actually cultivating this sense within myself, then it makes sense to get support and positive feedback that, like, yeah, great job. Like, you're thinking of things, you're exercising this critical thinking, you're able to kinda go outside the box for yourself. Then it's also true on a larger scale, where ideally you'd be starting to lean into more global creativity because you had the training wheels to get there. But still, sometimes some naivety, maybe, is what I'm hearing you say. Like, if you're always leaning on the received wisdom of experts, you know, some naivety can be useful.
Speaker 2:Like, you do something they wouldn't expect. You could question some assumptions they have, and the experts are definitely not always right. And so I think, at that level, there's gotta be a little bit of a balance. Like, you know, you don't wanna also be doing, like, crazy pseudoscience stuff just because you've
Speaker 1:shut down all the experts. But I hear that. At the very least, it's like, training creativity should involve some naivety. It should definitely involve ignoring the global sense of creativity, at least to train it.
Speaker 1:Maybe when you run in production, you should be global, maybe not. Who knows? But at least when you're training it, we should, like, allow kids the flexibility to, you know, actually be happy and successful in their creativity on a local scale, even if it's not actually globally novel, because it's fine. You're a kid. Like
Speaker 2:That's right. Yeah. Having environments where there's, like, a little bit of these training wheels, where, yeah, you're not googling to see if anyone else has ever had this idea, but just working to cultivate that ability within yourself wherever you are. So it's, like, a skill you can take into, you know, almost any setting, just to be able to think outside the box a little bit and try to see things from multiple perspectives and so on. Like, lots of these, you know, hacks for creativity. And there's a real skill you can develop in, you know, questioning assumptions and looking beyond what's obvious.
Speaker 2:Even if other people have done that already, still those skills can then generalize to the actual testing grounds that you might care about in your life.
Speaker 1:Yeah. That makes sense. I'm also thinking about, like, wrapping up more generally, I guess. This is just a great conversation about creativity, novelty, achieving great things, stepping stones. Part of what I'm thinking about, you know, if I were actually an AI investor, is, I guess, the conclusion from your book, which is about finding those alternative signals around, like, interestingness, around novelty, something other than just, you know, the signal that we're making progress in a very naive way that might be deceptive.
Speaker 1:One investor I really respect has talked to me about, like, his strategy on a personal level. Like, as an angel investor, with his own money, it was like, I really just invest in companies I wanna stay involved in, where I think they're interesting. I wanna see how it goes. I wanna learn more about the space. And I think I'm putting it together and wondering, like, what if, you know, it's a nice selfish strategy for yourself just to learn, but it actually could just be a good strategy, as in, you know, why bet on the normal signals?
Speaker 1:Maybe just having divergent signals, maybe just having a signal of, like, these are people I wanna hang out with, learn more about, see where this goes. Like, that totally sounds like a legitimate investment thesis.
Speaker 2:Yeah. Yeah. I mean, of course, there'd be edge cases where, like, you know, if what I find interesting has nothing to do with, like, business success at all, then you could run into trouble. But in general, being attuned to what you think is different and interesting, that other people will underappreciate. That might be saying, like, I think this company is undervalued.
Speaker 2:The opportunity is undervalued because other people don't see what I see, what makes this interesting, the potential it has, or just the potential these people have to do new and hopefully valuable stuff.
Speaker 1:Yeah. And, I mean, I'm not an investor. I don't totally mind these people losing their money. And more generally for AI research, like, I'm much more incentivized to want people to just do divergent research because, like, I'm not financially or otherwise motivated by any of this. Like, I would love to have a ton of research happen in as many directions as possible, as diverse as possible, because that's obviously what helps fuel success more globally and, you know, helps all of us.
Speaker 1:So I think, like, whether or not it's good, there's the conversation about, like, what's good on an individual level, for an individual or for a company to pursue for their own interest, to actually maximize success for them. There's separately, like, what society should want. And I think what's partially interesting about your thesis is, like, the 2 are not actually that separate. Like, individuals should actually pursue interesting things for themselves, not just because, if everyone did that, society would be way better off.
Speaker 2:That's right. Yeah. And, I mean, it'd be awesome if there were good safety nets so that society as a whole would reward the risk-taking that often happens. And some of us will, you know, be better situated to take more or less risk based on our circumstances. But, yeah, even if you're constrained, like, you know, you need to make a living, and, you know, starting a company with a really radical thesis maybe isn't for you at the moment.
Speaker 2:So there may be ways to explore in more local ways within your life, and the book also hopes to encourage that. Wherever you have the risk or the interest, if you can begin to explore that, then there could be, you know, profound benefits to yourself down the road.
Speaker 1:That's a great place to end. Thank you so much, Joel.
Speaker 2:Thank you, Daniel. Been fun.
Speaker 1:Thanks for coming on. That was a great conversation with Joel Lehman, coauthor of Why Greatness Cannot Be Planned. This is Thinking Machines: AI and Philosophy, which is my side project because I miss being in philosophy full time. Anyway, if you have any ideas for a future conversation or any feedback for me, feel free to email me at daniel@slingshot.xyz, or reach out to me on LinkedIn. I'm Daniel Reid Cahn.
Speaker 1:Thanks again.