Distilling the major events and challenges in the world of artificial intelligence and machine learning, from Thomas Krendl Gilbert and Nathan Lambert.
Tom [00:00:00]: Hi, Nate.
Nate [00:00:01]: Hey, Tom.
Tom [00:00:04]: I have an exciting announcement for you.
Nate [00:00:08]: What is that?
Tom [00:00:09]: The Retort was awarded the Nobel Prize for Literature this year. And the citation credits us with having created new poetic expressions for AI commentary within the great American podcast tradition.
Nate [00:00:32]: Wow. Congrats to us. When are we going to spend that? What is the award, $100,000? Then I can buy the... Okay, so I'm wearing the normal Meta glasses, which cost a few hundred bucks, for fun, to scare Tom by recording him in another way. We're going to take random video throughout our recording. That's how we're going to do shorts. Low-tech shorts. I could buy the $10,000 ones that aren't even for sale. Then we're really going to be on top of our game. We're going to have show notes in our glasses.
Tom [00:01:01]: It's funny that your mind went immediately to the cash prize.
Nate [00:01:07]: Well, I am a scientist. I have more thoughts on that.
Tom [00:01:14]: But welcome to the Retort, I guess I'll say at this time.
Nate [00:01:18]: Yeah, I think we were due a model release therapy session, but we were traveling a bit too much to go through all the new Llama models. They had two models. I think we just kind of missed the first half of the fall of excitement in AI. But that's fine. That's life. It's probably good for people to miss parts of things. I'm well into my no-Twitter-at-work policy and it's great. Waiting until we can delete Twitter. I actually can't, if I need to win a Nobel Prize. Yeah. You told me that you have some takes ready. Oh, for sure.
Tom [00:01:57]: I have several takes on it. Yeah, I can jump into it. So, I kind of feel like this is a joke. I think it's a joke. We should clarify, I guess, because who knows what's going to happen. So yesterday, the Nobel Prize for physics was awarded to some of the key people behind neural nets. That was a bit surprising. It was news to me that neural nets have improved our understanding of physics.
Nate [00:02:31]: Haven't we made no progress in physics since Einstein? Well, right.
Tom [00:02:37]: I mean, yeah, there's layers to this. Anyway, that was yesterday's news; we're recording this on Wednesday. Earlier today, it was announced that some of the key folks behind the protein folding stuff have won the Nobel Prize for chemistry. I admit, it was also news to me that AlphaFold has changed our understanding of what proteins are, or even what protein folding is. That seems a little more defensible to me than the award yesterday, but I'm not a chemist or a physicist, so I should qualify that.
Nate [00:03:22]: Wasn't there a peer-reviewed publication showing that most of the chemicals AlphaFold predicted were actually not viable substances? I think there was an actual chemistry paper that debunked a large proportion of their claims. That's not surprising. I was talking about this with people the other day, before the context of the Nobel Prize, so let's see if I can track down the link for the show notes. I don't know the schedule of the next awards.
Tom [00:03:49]: I don't think we've yet gotten awards announced for economics or physiology or medicine. Well, I guess I outed us as winning literature already.
Nate [00:04:00]: Yeah, literature is the easy one.
Tom [00:04:02]: I think there are several dimensions of this that are not good. They're not good for science; I think they're not good for AI, frankly. I already wrote this briefly on LinkedIn. The result is funny, because what I wrote yesterday was basically that them winning for physics sets a precedent. I think I even wrote that within five years, basically every STEM Nobel is going to be fair game for an AI person to win.
Nate [00:04:43]: The thing is, it wasn't five years too early, it was 24 hours. It ended up being true. Did you actually say this? You just posted it yesterday? Oh, yeah. Nice. We'll find that.
Tom [00:04:57]: So I was far too conservative in that timeline. No, I mean, look. Historically, you didn't give out Nobel Prizes to people who invented engineering tools that just made it easier, more efficient, or frankly less costly to conduct scientific experiments. And I think even that framing is generous with respect to these awards for physics and chemistry. I mean, again, the context here: Alfred Nobel famously invented the prizes, in some ways, because he felt responsible and a bit guilty about his invention of dynamite.
Nate [00:05:49]: Right.
Tom [00:05:50]: So he invented dynamite, and that obviously was weaponized in ways that, to be fair, he didn't intend. That was impactful.
Nate [00:05:58]: Yeah, it was.
Tom [00:05:59]: Yes. In more ways than one, it was impactful. He was concerned about the type of impact that that invention was having or would have. And so, I believe in his will (that's the story), he specified that some portion of his estate should be dedicated to these annual prizes. And his will actually specifies the categories, so those haven't changed, I believe, since then. I feel like what's happened in the last 48 hours is almost as absurd as if Alfred Nobel himself had invented dynamite and then won a Nobel Prize for it.
Tom [00:06:43]: That's the level of circularity now: just because we're building things that are faster at predicting things in highly artificial settings (so they're not really predicting real things, they're predicting simulated things), you win the most prestigious prize in the physical sciences. I don't think it's going to age well.
Nate [00:07:16]: I don't either. I think it'll "age well" in the sense that it gets reflected in more things, but I don't think people trained in history, or in the broader sciences, are going to appreciate it. It's like the fact that the internet is consuming all of our world, including our physical world. These stupid glasses I'm wearing could just spy on people; it's a very physical thing. It's the intrusion on the physical by the digital.
Nate [00:07:47]: It's sad.
Tom [00:07:49]: Yeah, it is certainly the intrusion on the physical from the digital. And I guess it's worth saying: what Nobel himself was concerned about was the impact on society, the impact on humanity, the impact on people. The context of these awards was never just to give whoever the most brilliant scientists were at any given moment arbitrarily prestigious awards. There was this context of: you've advanced our collective understanding of this scientific endeavor, and we (literally, I guess, the King of Sweden, but more broadly the community of scientists, and I think even by extension the human community) honor that by giving you this prize. And so yes, you do get money, which is what Nate immediately thought of when I awarded it to us. But you also get basically untouchable status in your community, and you get license to do and say things, in that community and beyond it, that are otherwise not available to people.
Nate [00:09:06]: Can we get into what Geoffrey Hinton has said? I didn't actually watch his acceptance speech and the speeches thereafter, but there's definitely some controversy in there. The TLDR of the acceptance speech I heard was like: thanks, you're welcome, and good luck with the AI doom. And then later, in commentary, he literally said that he is proud that his student fired Sam Altman. Talk about further intensification, to immediately use the platform like that. The AI safety thing I think is more okay, even if it puts a somber tone on the moment. I don't agree with his views, but it's worth noting he took a moment on that platform to discuss them. The other stuff just shows that he's probably in too deep.
Tom [00:10:04]: I didn't even see those two quotes that you named, which is funny. The one I saw, I think just in the New York Times article announcing it, was Hinton expressing sheepishness. I don't have the quote in front of me, but what he communicated was something like: I don't know if it makes sense for the work I'm getting this citation for to be awarded for physics, but there's also no Nobel Prize for AI. So he was rationalizing it in that way. But yeah, Geoffrey Hinton is obviously a really key figure. Godfather of AI and all that stuff, though a lot of that is marketing. I think the fairer way to say why he's important is that famous quote, I believe from Max Planck, talking about physics: something like, theoretical physics advances one funeral at a time. What he meant was that the next generation basically isn't allowed to change the paradigm of what physics can be until their advisors let them, and advisors are very, very unlikely to let them. So you basically have to wait for your advisors to be incapacitated or dead in order to broaden the playing field. And I think the point was that that's the reason why most or almost all of the founders of quantum theory were in their early twenties, or even their late teens, when they did this work. It's not because, in some kind of objective sense, they had more neural pathways or brain cells. It was a sociological problem, and they just happened to be the generation that was finally in a position to make and publish those sorts of breakthroughs. I think there's a parallel to Hinton himself, just because he is sort of the oldest of us who's still in the game. He was one of the first people to really develop the idea of what neural nets could be. He was one of the first to assert the generalizability of that. And he lived through at least one, maybe even a few, AI winters, basically manning the battlements of that position. That's significant; it's worthy of honoring him for that. Because, and you alluded to this, when you win a prize like this, it's not really just a reflection on you. It's a reflection on your students, your students' students, your lab, and all of the projects, papers, and deliverables that your initial pathway made possible. So there's certainly a contribution there. I don't think it's a physics contribution. And I guess, Nate, as you alluded to, it's already clear that many of the most significant forms of that contribution are shitty.
Nate [00:13:36]: Yeah, I don't think I'll ever not be disappointed about this.
Tom [00:13:44]: Just tell me how you're feeling. I mean, I've expressed how I'm feeling. How do you feel?
Nate [00:13:48]: It's pretty similar. It's just jarring. It feels like a loophole in the simulation, just furthering the bubble of AI being all things. I was in DC very briefly again at this public AI thing; Mozilla is doing a lot of coalition building around the term "public AI" to differentiate from private-sector AI. And even in those circles, the difference between generative AI and other types of AI, like predictive AI, predictive policing, and all these things we've known to be bad, is not that well articulated. The fact is, we are talking about AI right now because a really narrow thing is happening, but all these very broad attributions to AI just keep piling up. This Nobel Prize feeds the general AI-is-everything framing, but it's really a narrow part of AI that is changing so fast. The algorithms behind Facebook and TikTok are AI, but they are not changing substantially. I just think we need to be narrower in focus and clearer about what we're doing, which is trying to get ahead of understanding these very big models, because the pace of progress is so high. That's where most of our regulatory attention should go. The Nobel Prize, they should have just waited a couple years. Are you freaking kidding me? Especially with the AlphaFold one. That's so new. Crediting them now just makes everything worse.
Tom [00:15:25]: That's right. The way I would put it is, I can actually imagine a situation where the Nobels could consecrate advances in fundamental sciences that were enabled by, you know, new AI techniques or technologies.
Nate [00:15:43]: And it's coming. It's coming. I think we just need to wait a few years.
Tom [00:15:47]: Well, yeah. I think the point though, is like, it has to happen first. It has to happen. And then they can award it for it happening. Because until it happens, we don't really qualitatively have the language to describe the significance of the findings. Right?
Nate [00:16:07]: It's almost like the same old nonsense of AI discourse has trickled over into the real world, and that's just bad. It's like Schmidhuber, you know, his vibe, which is claiming that he actually did everything first in deep learning and that everyone's not citing him enough. He has a multi-hundred-word tweet on why he should have won the Nobel Prize, which is just classic AI. It's just classic Schmidhuber. The start of it: "The Nobel Prize in Physics 2024 for Hopfield and Hinton rewards plagiarism and incorrect attribution in computer science." This is what Schmidhuber has been doing for years. I don't want the rest of the world peeping behind the curtain of our mass chaos, because we're not ready for it. It's just going to feed this bubble.
Tom [00:16:54]: It's coming at a time when Hinton himself had, of course, and we've probably discussed it here before, this complete rebrand as a convert to worries about existential risk, AI safety stuff. So that context bleeds in. That's a different dimension here: this Nobel, coming when it does, at least for him, provides by proxy a lot of validation to that network of concerns, because it consecrates the significance of what Hinton's contributions have been and will be. And that changes things. I think the parallel, I guess, would be something like Timnit Gebru winning the Nobel Peace Prize. Sorry, that might age in a weird way if she actually does win it. I don't know. The citation would be something like: for raising awareness and raising the alarm about the risks of societal-scale AI infrastructure. I don't know that this is...
Nate [00:18:14]: I'm gonna go on the record and say that people like Hinton, LeCun, Bengio, even Fei-Fei Li, these people are not the ones that had the ideas driving the ride we are currently on. And I don't put that much credence in their expectations of where general AI is going, given that they are not the people that had the insights driving what is happening now. It's the Anthropic folks and DeepMind folks that were the original people on scaling. Those people had bought into what is happening now for years before ChatGPT; they saw it coming, and they've been building their worldview around it. So those are the people that have the most credibility in terms of near-term forecasting, and a lot of them say things analogous to what Hinton has to say. But we need to give the microphone to the people that are closer.
Tom [00:19:13]: So another dimension of this, and this is a theme of our conversations, is: well, what is science really, and does AI count as a member of it? I've been on the record saying that it does not, and I think I still basically feel that way. So I think it really does not speak well to the Nobel that at least two, and God knows what will happen in the next several days, at least two of their scientific awards have gone to people who, honestly, are not scientists. They may carry the title of computer scientists, but they're not scientists in the sense that the Nobel was founded to consecrate. And that's fine. I don't mean that pejoratively. But the issue is, the Nobel was created and remains awarded in a context where science itself has changed repeatedly since the early 20th century. I mean, you made the joke: was Einstein the last person who advanced physics? I'm saying no, right? We had quantum mechanics; we had a lot of fundamental contributions after him. Look, Einstein was a baller, no problems with him. We only just a few years ago confirmed gravitational waves, something he predicted back in 1916. And I guess, as a follow-up, it follows naturally from general relativity that you expect something like that, and it took us about a century to even confirm that idea. So he's a baller, but there were still other scientists, physicists, after him who qualitatively advanced other areas of physics, and related areas that built on his contributions. But even the culture of science in that classical sense has evolved into what we call big science now. There are enterprises like CERN and the Large Hadron Collider: you can look at the paper that announces the discovery, or the confirmation, of the Higgs boson. The joke is that there are thousands of co-authors on that paper. They had to come up with some special formatting for it, because how do you possibly list that many names?
Nate [00:21:41]: Yeah, it's like when you cite the Gemini paper: if you do it correctly, the citation is over a page long. And so, one way to describe this, and I'm kind of free-associating right now, is that "industrial revolution" may not be the right word, but it's that sort of shift, where you realize that you can mechanize, and follow some really clear guidelines for, things that previously were scientific. You can just churn it out. So yes, you can scale...
Tom [00:22:19]: James Watt was a brilliant motherfucker. Right? I'll assert that; I'm comfortable saying that statement. Should he have won a Nobel Prize? I don't think so. Not according to the categories of what the Nobel awards.
Nate [00:22:34]: Wait, wait, who was it? Who did you say? I didn't catch it.
Tom [00:22:36]: Right. Yeah. Well, actually, that's a bit of a thing. It's almost like the 18th-century version of The Social Network: who invented the steam engine? Because there are a few different people you can credit with different parts of that. But James Watt is often cited as at least a synthesizer, if not sort of the founding father of the industrial revolution, because of what his approach enabled, and also the theory that accompanied it. But it's still basically applied engineering. I hate to say it, I know that sounds dismissive, and I'm a bit self-conscious saying the old joke, if you're a physicist, that engineering is just applied physics.
Nate [00:23:18]: This is kind of the TLDR: science is now engineering, or engineering now counts as science.
Tom [00:23:24]: I think this is exactly it. This is exactly the gauntlet the Nobel committee has thrown down in the last 48 hours: science used to be Einstein, then it was big science, and now basically engineering is fair game for what science can be.
Nate [00:23:44]: Llama 4 is going to win the Nobel Prize.
Tom [00:23:47]: Seriously, they've completely opened up that floodgate. Which, look, when you consecrate something like this, part of the power of consecration is not just that you're confirming that a new thing fits into an old category; you're playing with our sense of where the boundaries of that category are. And that was referenced yesterday, and I'm sure also today. Someone interviewed about the Nobel for physics yesterday said: well, you know, the boundaries of physics have changed over time. That's an interesting statement, and it's not exactly true. I think it's important that we accept that physics itself, as an objective phenomenon, as a property of physical reality, has not changed over time. The four fundamental forces, all that stuff, doesn't change. And in fact, if it did change, there couldn't be a science based on it, because then you couldn't have reproducibility, you couldn't do experiments, you couldn't distinguish dependent and independent variables. For something to qualify as a science in that way, the phenomena have to not change. The material is new; the underlying principles can be discovered, but they don't change. So when he says that, it begs the question, which you just named, it was implicit in that: high-quality, influential engineering at scale, with an indeterminate number of applications to prior domains, not only basically qualifies as science, but is enough to take the ultimate award for what science is even recognized as being. What science is allowed to be has now been captured, or contested, entirely by that perspective. And I wrote this: I think the result is that, over time, we'll think less of the Nobel.
Nate [00:25:53]: Well, maybe it's a natural thing. We're still so early in the arc of this technology that it may not be that surprising that most of the things that are so influential can only be done through really complex processes that don't look exactly like what science used to look like. It's not finishing science; it's just changing science. Science is always changing and allowed to change. Science is a community; it's a shopping around of ideas. So it's always natural for it to shift.
Tom [00:26:31]: It's always natural for it to shift. Of course, scientists do experiments, they do research, they are at least supposed to update their own views and hypotheses in response to new information. That's part of what makes modern science so important to how we live today: we're responsive to those changes; we don't have a static view of the cosmos, or of ourselves in the cosmos. But what's important is that when science changes, it changes for reasons that are internally consistent. It's getting closer to some fundamental grasp of the key principles or issues. And that's what I think is missing from these two awards this week. AlphaFold, literally, is very impressive; it's speeding up the rate at which you can calculate protein folding. But my understanding is that that's basically it. That's kind of all it's...
Nate [00:27:40]: Yeah.
Tom [00:27:42]: So that either means that the state of chemistry is so poor that literally nobody else made any kind of material contribution to our understanding of it, such that we just gave it to the people who help it move a little bit faster. Or it means that our understanding of what chemistry is, is changing. It's now moving in the direction of this computational application, the application of compute to it.
Nate [00:28:14]: Where is that going? I don't know. The cynical view is that it's just an attempt to join in on the AI hype by people who were pulled into the Nobel decision committee rather than born into it. It's the generational problem. Well, that's, I think, what's sad. I don't even think that's cynical. I think that's just what's going on. It would be cynical if you were arguing that the Nobel Committee is strictly self-interested and that there's nothing else going on.
Tom [00:28:41]: I think that's just what's going on: the Nobel Committee is trying to stay relevant in a time of incredible hype. And I do think that's sad, because it lessens the Nobel. The nature of the Nobel is that it's meant to be the summit, the pinnacle. It's not meant to be something that dominates a news cycle, or something we pay reference to once a year because it's a meme or something like that. I mean, now maybe it is. They're trying to cash in. They're exchanging prestige for relevance, I think.
Nate [00:29:22]: Well, this is one of the things I say over and over: it's a lot harder to build a reputation than it is to spend it. And they're spending. They think they're building, but they're just spending. I don't know why you would spend your reputation as the Nobel to be included in the AI discussion, but that is what they have done. They're now part of it. They paid reputation to become part of this AI nonsense. It's like the worst trade deal of all trade deals.
Tom [00:29:48]: It doesn't really seem like Hinton cares that much. I don't know if I've seen any quotes from Hassabis or those folks about it. I'd expect they care more. I mean, not because I know them, but because I've seen the kinds of statements they've made before about these things. DeepMind's early years were very much about demonstrating... It's very on-brand for that. It's quite the coup, right? Because much of what DeepMind has done, since its inception, is test the limits of...
Nate [00:30:22]: This is almost like a DeepMind win. I'm not that surprised that DeepMind was able to brainwash somebody into this. I'm more surprised that the random neural net paper from whatever, the 80s or 90s, managed to break through into it.
Tom [00:30:34]: Yesterday's was weirder.
Nate [00:30:36]: Wow. Shouldn't they just give it to whoever invented the perceptron? How far back are you willing to go? What is the Hopfield network's actual innovation? I don't even know what that paper's deal is. That's an issue. That's one of those facts I used to know, but I've forgotten.
Tom [00:30:58]: You have to be alive. The Nobel has these restrictions: you have to be alive, and it's meant to go to an individual person or a small collaborative team. I forget if the will specifies that explicitly, but there are these restrictions. So maybe that's part of what's going on: the committee feels like its hands are increasingly tied by the restrictions in the will, by what the Nobel is allowed to be. And so maybe they have no choice but to exchange their prestige for relevance. But what a wagon to hitch yourself to, I guess.
Nate [00:31:44]: Not too late to hop on the AI train, choo-chooing somewhere undetermined, along for the ride. Yeah, good thing SB 1047 didn't get passed. Otherwise, there would be no Nobel laureates awarded in California.
Tom [00:32:00]: I mean, Gavin could win the Nobel Prize for Peace, I guess, for, I don't know, not signing the bill. That's the thing: they've completely opened the door to any story you can tell about a citation for AI. I guess that's just the ride that we're on. We'll see what happens. I don't think they've announced physiology or medicine, economics, peace, or literature yet, right?
Nate [00:32:32]: Oh, if peace gets one... oh God, I realize that may well be what's about to happen. It's going to come just as the ethics and FAccT community starts to get a renaissance, a rebirth, from Emily and Timnit finally removing themselves from the field. It's like a four-year gap. They're finally starting to do language model research and interface with AI safety research. I was talking to Seth Lazar and he's like, yeah, FAccT seems to be better again, and they changed their sponsorship structure. I was like, oh, they finally overcame the cultural brouhaha of misinterpreting the Stochastic Parrots paper into saying we shouldn't work on language models. If they were to go and give it to one of those figures that used to be the leaders in the field, who now don't go to FAccT anymore, who refuse... if they get one of these awards, their field is doomed again. Yeah. Go straight to jail, don't pass go.
Tom [00:33:43]: Well, right. Actually, now that I say that, I don't know if it's worth speculating, but there could be a coup opportunity for the Nobel if they decide that they're the arbiters of that conversation.
Tom [00:33:58]: The whole safety versus ethics brouhaha. Right. Wow, that would be a power move on their part. I don't think it would go well, but I would have to applaud. That would be a way for them to exchange their prestige for relevance in a highly, highly polarized moment. I'm not sure what the implications would be. But maybe that's not their job, and they might just do it. I guess we'll see. I might even put a bet down. I might even put even odds up. I don't know. That's pretty messy. Anything else? What else is going on? There's a lot of other stuff we could touch on.
Nate [00:34:44]: I still find the whole AI safety culture thing pretty funny. They keep remaking new companies, and OpenAI is definitely just a consumer tech company now. But OpenAI's fundamental tension for their whole existence, and probably what drove them, was this tension between safety and product and scaling. Now it seems like there's a consolidation, so I almost think their magic will be lost because of this cultural uniformity they now have. I guess my reflection on that recently has been, I mean, it's a remarkable moment, right? Because you've seen the memes of "how it started, how it's going," with the entire C-suite now gone except for Sam Altman.
Tom [00:35:28]: You know, this is happening in a larger horizon where most of the biggest of the big tech companies are now actively being litigated against or pursued, whether by the DOJ or the FTC, for different varieties of antitrust violations. If we cut a bit deeper, what's going on is that the last 20 to 25 years, not really of AI, but of tech development, has been about trying to create, I mean, we call them moats and things like that, but really a safe environment for these companies to do what they want to do. Government moves slowly, but that now seems to be beginning to bite those companies in the ass in significant ways. They're actually seriously looking at claiming, as part of the suit against Google, that Google should be broken up. That would be transformative. And so, call me even hopeful now: we realistically could end up in a much more competitive ecosystem than what we've seen for the past 25 years. Because between, I don't know what word to use to describe what's happening around OpenAI right now, people leaving the company and trying to create their own companies in its wake, combined with the growing skepticism of the biggest of the big tech vendors and the prospect that they could even be forcibly broken up, the current order can't be left unquestioned.
Nate [00:37:27]: It all comes down to how much scaling actually works and how much you need to scale to serve profitable products, which is just, we don't know. We don't know if scaling is actually going to make things more profitable so you can build an ecosystem around it, or if you just use big models to train really small models and people make open source stuff, and all this investment is not needed to maintain a moat or anything. We're so early into the adoption of AI that there's no real way to say definitively which type of approach is going to win. There are serious trade-offs, and we don't know the most fundamental influences on those trade-offs. The fundamental question is: does scaling make things meaningfully better in ways you can't replicate, yes or no? Going big is expensive, and if the answer is no, no difference, then your trade-off of going big is just, you're...
Tom [00:38:32]: Well, I think the point is there are going to be limits to that as a value proposition. We don't know exactly where the limits are, but there are going to be limits there. And ultimately you're right, it's about the market. Scaling matters, we've seen that, but it only really matters in relationship with can you actually move product? Can you actually create things that people will pay to use and how many people will pay how much to use those things, how often, right, on what terms? And I'm sorry to disappoint the scaling law people if they really just enjoy dissociating from that reality, but that's the reality. And my view is that that's the overwhelming reason why OpenAI has been the vanguard, has been almost hegemonic so far is, I honestly think it has much less to do with scaling laws and much more to do with product. They've just moved product.
Nate [00:39:33]: Their product is better. It's like their developer API is better, even though Anthropic is pretty close to them because Anthropic's model is slightly better.
Tom [00:39:40]: I just mean that in terms of strategy. I'm actually not even talking about the technology right now. They were literally the first ones who were willing to do ChatGPT, right? It's so easy to forget that moment where the biggest tech vendors were sitting on language models and weren't releasing them publicly, because, understandably, they were worried it was a Pandora's box. OpenAI was just willing to be the first to make a product out of it and then to keep moving on it. They've kept up the momentum around that. I'm not sure if that strategy is going to keep working in exactly the way it has, but they've shown that there is at least the prospect of a huge market for this. And the question there is really not going to depend overwhelmingly on scaling. It's likely to have lots of different market segments. There might be fragmentation. There are going to be different applications. The search for generality, I'm sure, will continue, but my sense is that it's going to become, even for researchers, less important than it's been in the last several years, because it's felt like it's been almost everything in the last several years.
Nate [00:40:57]: The thing that makes me so sad is that I think this is such a long-term thing. This integration of AI products is on an order-of-a-decade timescale, but all the capital movements are on the order of a couple of years. And you're really betting on the shape of this adoption curve being fast.
Nate [00:41:20]: My CEO isn't on the line. I'm not on the line.
Nate [00:41:25]: It's a risk.
Tom [00:41:27]: It's an entrepreneurial risk, yeah. And I mean, there's a lot of people trying to commandeer it, shape it, tell stories about it. It becomes self-perpetuating, right?
Nate [00:41:37]: Because the more compelling the story is,
Tom [00:41:39]: the more real you can hope to make the story
Nate [00:41:41]: and if you get the investment, then you can actually build the product and begin to try to make it real.
Tom [00:41:47]: This is the way VC has always operated. It has always thought of itself that way.
Nate [00:41:51]: They're not predicting the future, they're making it.
Tom [00:41:54]: Part of making it is building it and predicting where to create it first. And that's that ecosystem.
Tom [00:42:03]: Yeah, it remains interesting. It's remarkable, even now. Every once in a while, I want to come up for air and go: we're still in this hype cycle. It's an incredible thing. I mean, I think it's still technically less than two years since ChatGPT.
Nate [00:42:21]: Yeah. Good to have the number of months in your head. We're at, like, 21 months. Or no, almost 23.
Tom [00:42:30]: It was November 30th, I think, the day it launched. And we've seen so many growing pains in that time.
Nate [00:42:42]: It's a remarkable thing.
Tom [00:42:43]: It's kind of become more exciting to me as I realize that if even the Nobel is no longer sacred, then that kind of means nothing is anymore, on several different sides of this. But the issue remains that we're living in the Wild West, and that's not sustainable. Sooner or later, the Wild West has to turn into governance, not just in an abstract way, but in literally intricate ways. The question is: how much of that is market? How much is law? How much is product? Because I think products are conditioning markets.
Nate [00:43:23]: We'll have to, we'll have to see.
Nate [00:43:29]: Yeah. Start getting your acceptance speech for the Nobel done. Oh, I don't know.
Tom [00:43:36]: We'd have to pull a Dylan, I think. He didn't even go when he was awarded it. When Bob Dylan won the Nobel Prize for Literature, he didn't even go. He did eventually have to submit a speech; he was required to, and they kind of bullied him about it. He did write it, but he just didn't show up, which is not surprising if you've been following that guy for decades. I can't think of a more appropriate way to send a signal to the committee.
Nate [00:44:05]: Wow. Amazing. Okay.
Tom [00:44:09]: I'll leave it here, and God knows where we'll be in a couple of weeks. We'll see.
Nate [00:44:14]: Yeah. Good to see you. Thanks for listening. Like, email, rate, rant, complain. We invite all of it. And we'll include them in our Nobel speech
Tom [00:44:29]: if we do talk about it.
Nate [00:44:31]: Yeah. The Retort, like those Patreon credit scrolls at the end of a YouTube video, "we would like to thank...", where you just sit there for five minutes. Okay. Well, good night. Bye for now, guys. Bye-bye.