NET Society

This week on Net Society, we’re joined by special guest Jeremy Nixon to dig into what today’s AI boom actually is and what it is not. The conversation opens with Jeremy’s path through early autonomy and self-driving, and why that era made it impossible to dismiss machine intelligence as hype. From there, the group zooms out into bigger questions about intelligence itself, contrasting “alien” intelligence with collective intelligence, and treating LLMs less like minds and more like powerful simulators. The episode then moves into creativity, measurement, and the real constraint on progress, which is not generating ideas but selecting and validating them. In the second half, the discussion turns to how LLMs were built, why major labs and incumbents made different bets, and what that says about institutional risk and ambition. The episode closes with a sharp look at AI apocalypse culture, the moral frameworks that grew around it, and how open models, game theory, and product reality collide with the temptation to turn AI into a new kind of religion.

Mentioned in the episode
Special Guest Jeremy Nixon https://x.com/JvNixon
AGI House https://x.com/agihousesf
Thiel on Progress and Stagnation https://www.lesswrong.com/posts/Xqcorq5EyJBpZcCrN/thiel-on-progress-and-stagnation

Show & Hosts
Net Society: https://x.com/net__society
Aaron Wright: https://x.com/awrigh01
Chris F: https://x.com/ChrisF_0x
Derek Edwards: https://x.com/derekedws
Priyanka Desai: https://x.com/pridesai

Production & Marketing
Editor: https://x.com/0xFnkl
Social: https://x.com/v_kirra

What is NET Society?

NET Society is unraveling the latest in digital art, crypto, AI, and tech. Join us for fresh insights and bold perspectives as we tap into wild, thought-provoking conversations. By: Derek Edwards (glitch marfa / collab+currency), Chris Furlong (starholder, LAO + Flamingo DAO), and Aaron Wright & Priyanka Desai (Tribute Labs)

00;00;16;01 - 00;00;38;05
Pri
Hey, welcome to Net Society. We have a special guest today, Jeremy Nixon. We finally hung out in person in San Francisco a few weeks back and had an amazing conversation, so I thought we'd bring him on the pod to talk. Just to give you some background (maybe, Jeremy, please fill in the details after): Jeremy is the CEO and founder of an amazing company called Infinity.

00;00;38;08 - 00;00;59;15
Pri
Founder of AGI House, he has worked on many different technologies in what I think are some of the most emerging, important corners of technology, including autonomous vehicles, and in various aspects of his research. I'll let you fill that in, Jeremy. But yeah, welcome to Net Society. It's so great to have you.

00;00;59;16 - 00;01;28;23
Jeremy
Thanks for having me on. Great to be here. I have a sense that, yeah, autonomous vehicles are a phenomenal example of how we are able to deeply transform a central part of society. It's a very physical, real-world, visceral proof that artificial intelligence has arrived. And when you see it take control of the wheel, you can't deny it. In the same sense, when you're sort of experiencing something on the internet, something that just seems like text, you know, it doesn't feel real.

00;01;28;23 - 00;01;55;00
Jeremy
You can't deny the reality of the self-driving car experience. And so when I was at Google Brain, inventing novel uncertainty estimation tools, some of which made it into Waymo, I had a sense that these ideas, you know, around superintelligence, which had kind of been grounded in sort of renegade technologists and philosophers in the 2010s, were beginning to become reality.

00;01;55;04 - 00;02;26;19
Jeremy
And now we have these systems which are, in my opinion, the first proof point that we're capable of inventing radical, identity-transforming, existential technologies. And so I do see AGI House as this ground zero, the center for creative invention, and Infinity as the next step into an age of automated research, in which umpteen technologies and scientific discoveries are going to emerge from this new category of alien mind that we've managed to invent.

00;02;26;21 - 00;02;27;11
Pri
Incredible.

00;02;27;12 - 00;02;52;01
Aaron
So do you think it's actually alien, Jeremy, or is it just, like, a synthesis of our collective intelligence? That's something I've always kind of thought about. Sometimes you use these systems and you're like, this feels like a foreign being. But other times I can't tell if it's just, like, the massive collective intelligence that's been jammed into these systems, and it's just impressive in kind of its own way.

00;02;52;01 - 00;02;55;01
Aaron
Right? I'd be kind of curious what you think there.

00;02;55;03 - 00;03;26;25
Jeremy
Yeah. Mathematically, the pretext task that allows for the creation of large language models is next-token prediction. And that practically makes them simulators of whatever context they happen to be internalizing. And so if you choose to internalize the totality of text on the internet, everything from, you know, Reddit to a body of books, basically the written corpus of human knowledge, then you will create a base model which by default simulates the statistical distribution of that text.
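A toy illustration of the simulator framing Jeremy describes here: a model trained purely on next-token prediction can only ever reproduce the statistics of whatever text it internalized. The bigram "model" below is an editorial sketch (nothing from the episode), but the mechanism is the same one he names.

```python
import random
from collections import Counter, defaultdict

# Tiny "pretext task": learn next-token statistics from a toy corpus.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# For each token, count the distribution of the token that follows it.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def sample_next(token, rng):
    """Sample the next token from the learned conditional distribution."""
    tokens, counts = zip(*bigrams[token].items())
    return rng.choices(tokens, weights=counts, k=1)[0]

def simulate(start, n, seed=0):
    """Roll the model forward; it can only emit patterns it internalized."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        out.append(sample_next(out[-1], rng))
    return " ".join(out)

print(simulate("the", 8))  # generates text in the style of the corpus
```

Swap the toy corpus for "the totality of text on the internet" and the counting for gradient descent, and you have the base-model recipe he's describing.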

00;03;26;27 - 00;03;51;06
Jeremy
And even that, in many ways, is kind of alien intelligence, because it is, as you say, this collective intelligence. It has internalized the experiences of every person who's ever put their ideas and experiences to text. But you could very well have simulated anything. So in the context of a world model, you're doing this real-time video prediction; or take the context of the foundation models that sit behind, you know, visual interaction and robotics.

00;03;51;08 - 00;04;16;18
Jeremy
You're attempting to train on a very different category of data and simulate that data in detail. I see the LLM, the large language model, as the first in a generation of technologies, a paradigm of simulators. And the reality is that the environment, whether it be the, you know, website that you're experiencing, the internet itself, the entire computing experience, that environment can be generated in real time by these simulators.

00;04;16;23 - 00;04;39;28
Jeremy
And in that world, the feedback loop that we create with our environment creates a novel simulator. And on that basis we have, well, the reason, I guess, I call it alien: we have an alien intelligence that is capable of becoming anything. The first thing it made sense to build was a simulator of humanity. But that obviously is not the only thing worth simulating.

00;04;40;01 - 00;05;05;24
Chris
So, Jeremy, the pushback around this idea of it being alien intelligence boils down to, like, structuralism, right? In terms of the philosophy here, to shorthand it, it basically says that if we don't have words for something, we can't know it. And therefore, you know, the totality of our existence is bounded by language; it actually limits our thoughts, etc., etc.

00;05;05;27 - 00;05;29;04
Chris
And that, you know, one of the criticisms, let's say, of early LLMs, or of the pace of LLMs, is that we haven't had any radical new discoveries, that we're just cross-tabulating everything that is known. And sure, we get, like, novel combinatorics coming out of it, but, you know, any actual breakthroughs are bounded by the fact that these things are trained on words.

00;05;29;04 - 00;05;39;02
Chris
And any new ideas are, you know, outside of its reach, because the words are in the box.

00;05;39;05 - 00;06;08;10
Jeremy
Yeah. I guess I do see the question of creativity as the central bottleneck to automated invention. And while there are umpteen examples, I think that, you know, most people would agree that AlphaFold and AlphaFold 2, and the ability to publish a comprehensive repository of the folding of every protein, which underlies antibody interactions and enables the creation of novel drugs and pharmaceuticals, is, like, you know, a great example of automated creativity.

00;06;08;14 - 00;06;30;01
Jeremy
And in a lot of contexts, blending is what human minds do. Now, I do see creative systems as much broader than that. So certainly randomness injection in the context of blending allows for the creation of new ideas. But in many ways I actually think the ideas are not really the bottleneck; these systems continually blend, but also create all sorts of new ideas.

00;06;30;08 - 00;07;07;12
Jeremy
And the core bottleneck is actually the evaluation of the quality of the ideas, and the representation of these automatically generated ideas as usable technological artifacts. And the basis of Infinity, this company that I've started, is that it's possible to run experiments on a GPU that accelerate the inference process, by doing discovery that has a metric attached to it, where you test the ideas of an LLM, which can come out of blending, but actually can come out of, you know, a broad space that you specify in advance, where you try ideas, you know, in warping or in,

00;07;07;15 - 00;07;33;04
Jeremy
yeah, a number of kernel techniques. So every major technique that, you know, you've discovered has been successful in the past can be applied, in theory, to every other kernel. And so you can automate the process of attempting those discoveries, implementing them in code, and, crucially, evaluating them with metrics, because the definition of creativity includes the, you know, creation of novel value. Novelty certainly has a reference set.

00;07;33;04 - 00;07;54;22
Jeremy
But value has to be measured. And I think it's this body of systems, like, you know, AlphaEvolve, which is able to discover hundreds of millions of dollars in value for Google by finding new heuristics for which kernels to apply, or the AI systems that have generated designs for generations of the TPU. These are, like, reinforcement learning algorithms that are in a feedback loop with the hardware.

00;07;54;24 - 00;08;05;11
Jeremy
These systems are deeply creative, and as soon as you attach a metric that allows you to evaluate their creativity, you can actually go wild with the implementation of creative algorithms that allow for automatic discovery.
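The generate-then-evaluate loop Jeremy is describing, where candidate ideas are cheap and the metric does the selecting, can be sketched in miniature. This is an editorial illustration with made-up names and a toy objective, not code from Infinity or AlphaEvolve:

```python
import random

def metric(candidate):
    """Score a candidate 'idea'. This hypothetical objective peaks at
    x = 3; in a real system this would be a kernel benchmark or experiment."""
    return -(candidate["x"] - 3.0) ** 2

def propose(best, rng):
    """Blend the current best idea with injected randomness, the
    'blending plus randomness injection' move described above."""
    return {"x": best["x"] + rng.gauss(0, 1)}

# The discovery loop: generation is trivial; the metric is the bottleneck.
rng = random.Random(42)
best = {"x": 0.0}
for _ in range(200):
    candidate = propose(best, rng)
    if metric(candidate) > metric(best):
        best = candidate

print(best["x"])  # climbs toward the optimum near 3
```

The design point matches the conversation: once a metric exists, even this crude random search reliably discovers value, which is why the evaluation step, not idea generation, is the constraint.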

00;08;05;13 - 00;08;24;13
Aaron
So can I play that back to you? It's like, by being able to run mass simulation across, like, an idea space, your gut tells you that they're already creative in, like, kind of the way we think about that, and they will increasingly become creative as the speed of iteration through those different idea mazes increases.

00;08;24;13 - 00;08;27;23
Aaron
Is that, like, a fair way to kind of play that back?

00;08;27;25 - 00;08;55;17
Jeremy
Yeah, that's definitely true. But you can also have them set out the space themselves; I do think that there's a sort of open-ended version of this. One thing I think is obvious to anyone who's used image generators is that these systems are superhuman at blending. So if you take two distinct ideas and you have a model attempt to generate images at the intersection of those ideas, you'll immediately get dozens of wondrous examples of creative outputs.

00;08;55;17 - 00;09;23;08
Jeremy
That is, novel outputs that to you might be very valuable. And so it is clear that this particular creative algorithm, which at that point is composition or recombination, is already effectively implemented by these models. And this argument that they can't be creative feels like, well, if you define any particular form of creativity, it's often straightforward to implement an algorithm that exhibits that form of creativity.

00;09;23;10 - 00;09;41;27
Aaron
Yeah, I think that makes a lot of sense. I mean, I think it's the speed, like you were saying before; it's probably why you're focused on this at Infinity, plus, like, the ability to blend. That probably will be, like, the next wave of creative output. But Chris, is that your belief, that they aren't creative?

00;09;41;27 - 00;09;46;13
Aaron
Like I actually do think that they're pretty creative and will increasingly become creative.

00;09;46;16 - 00;09;47;24
Chris
No, I didn't say that.

00;09;47;26 - 00;09;50;19
Aaron
Yeah, I didn't know. I couldn't tell from your question whether you were just, like.

00;09;50;19 - 00;10;08;06
Jeremy
Yeah, he, I think, accurately said that the pushback is that the training data distribution is something that the model has to adhere to on some level. So if there's something that's not in the training data, you know, it's hard for the model to generate coherent content about that thing. This is a classic argument.

00;10;08;09 - 00;10;35;16
Jeremy
So typically, actually, the concepts in machine learning are in-distribution versus out-of-distribution, where there's a statistical distribution of all of the data that you've trained the model on. And, you know, one argument against creativity in the LLMs is, well, the only reason it seems creative is that it's seen every argument that's been made, on the broader internet, for example, and can repeat some basic variant of that argument to you.

00;10;35;22 - 00;11;02;09
Jeremy
But actually, if you closely examine the training data, you'll see that the idea was invented by humans long ago, and we just weren't able to trace its genesis. And partially why you have to bring up blending is that it's clear that, well, actually, the model is not just repeating what's in the training data when you give it a prompt that has it attempt to blend two, you know, objects or two ideas that were never combined in the original training data source.

00;11;02;12 - 00;11;10;13
Jeremy
And so I think that that argument is this kind of canonical place, like the opening salvo in any conversation about creativity.
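The in-distribution versus out-of-distribution idea being debated here can be made concrete with a toy fit: a model trained on one range of data looks competent inside that range and degrades sharply outside it. An editorial sketch, not from the episode:

```python
# Fit a straight line to y = x^2 using only samples from [0, 2]
# (the "training distribution"), then query inside and outside that range.
xs = [i / 10 for i in range(21)]      # training inputs: 0.0 .. 2.0
ys = [x * x for x in xs]              # the world the model gets to see

# Ordinary least squares for y ~= a*x + b, done by hand.
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
    (x - mean_x) ** 2 for x in xs
)
b = mean_y - a * mean_x

def error(x):
    """Absolute error of the fitted line against the true function."""
    return abs((a * x + b) - x * x)

print(error(1.0))   # in-distribution query: small error
print(error(10.0))  # out-of-distribution query: the guess falls apart
```

The same logic scales up: an LLM queried near its training distribution looks fluent, and the argument in the episode is about what happens, and what blending recovers, once you leave it.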

00;11;10;15 - 00;11;33;28
Chris
Yeah. And it also goes back to the alien intelligence point of view, right? That was kind of where I was coming from, in saying there's nothing alien about these machines: they are (a) made by us and (b) trained by us, right? Like, they're an amplification and, you know, a force multiplier for ourselves.

00;11;33;29 - 00;11;54;03
Chris
Like, if you actually want to get toward, you know, something that's actually alien, you've got to look at an octopus, a creature that has eight different brains, you know, that functions radically differently than us, to even come up, I think, with something that actually approximates what alien intelligence could be.

00;11;54;05 - 00;12;37;24
Jeremy
Okay, this I object to. I think that an octopus is actually far more similar to a human than a base model is. I don't know if you've spent time with state-of-the-art base models, but maybe after this, load up, you know, Llama 405B base. And you'll see that, unless you have constrained the outputs of these LLMs with RL, with a fine-tuning process and reinforcement learning process that forces them to exhibit the personality of a helpful assistant that is attendant to your questions and answers them, by default the outputs you get from a state-of-the-art base model are wild and unpredictable and have vanishingly little to

00;12;37;24 - 00;13;06;26
Jeremy
do with, like, the kind of human-agent interaction which is practically useful to you in day-to-day life. And I think that outside of the text distribution, it's actually hard to even comprehend what's going on inside of world models. And so, for example, if you train a protein language model, which by default does not interact with text in any way, you scarcely have the ability to comprehend what is going on inside of the embedding states of

00;13;06;29 - 00;13;20;11
Jeremy
this deep model. And if you try to inspect AlphaFold 2's embeddings and sort of come to some comprehension of the nature of the intelligence that's inside of it, I think you'll find it to be deeply, deeply alien.

00;13;20;13 - 00;13;32;18
Chris
Well, when it comes to protein folding, I'll agree with you there. When I did orgo in college, that was beyond comprehension for me. I just could not wrap my head around it. So I can see the point on that one.

00;13;32;20 - 00;13;54;28
Jeremy
Yeah, I guess the point is, the training data distribution that is most useful to humans is the most economically valuable subset of these alien intelligences that it's worth creating. And to be clear, it's worth investing hundreds of billions of dollars into, because the reality is that the value of the model is what drives our ability to create it.

00;13;55;01 - 00;14;18;20
Jeremy
Yeah. The techno-capital system is going to push into existence the subset of these things that are useful, and it is non-trivial to train them. So actually you have to acquire capital, and that capital demands a high return, and that return means that really you're constrained in the kinds of systems that you create. You'll notice that, like, even though tons of companies train tons of base models, they very rarely release them.

00;14;18;22 - 00;14;35;06
Jeremy
So I guess when I encounter people who've never talked to a base model, I just want to, like, shake them and say: actually, there's something far more magnificent here than what you interact with every day, something far wilder and more alien than you can conceive of.

00;14;35;09 - 00;15;06;15
Aaron
Which is amazing. Well, three questions kind of come to mind, or at least two. So let me ask them. One, like, who created these alien models? Because I think it's going to increasingly matter, and understanding kind of, like, their origin story I think is relevant. And two, you know, if the data that these base models are being trained on matters to such a degree, and as we train them with larger and larger data sets that are mostly human in origin, are we going to create more synthetic data?

00;15;06;17 - 00;15;14;03
Aaron
You know, like, how are we going to kind of push that frontier in two, three, four, or five years? I'm kind of curious if you have any thoughts there.

00;15;14;05 - 00;15;15;07
Pri
Great question.

00;15;15;09 - 00;15;47;14
Jeremy
Yeah. The question of invention always sits deeply with me. So actually AGI House is really about the inventors of the intelligence age. And I think that in the origin story of LLMs, whether you talk about Alec Radford and the creation of GPT and GPT-2, or Noam Shazeer preceding him, working on the Transformer and realizing for the first time that the scaling hypothesis was real, there's, like, a number of researchers whose ideas and whose papers were foundational.

00;15;47;16 - 00;16;11;04
Jeremy
You could go back to 2015 and Ilya Sutskever's sequence-to-sequence work. Its core idea is that, using attention mechanisms, you can actually take a sequence of text in and output a sequence. Prior to that, machine learning had primarily been about outputting, you know, binary or sort of multinomial predictions. And so there are a number of researchers who've taken step after step.

00;16;11;04 - 00;16;47;16
Jeremy
So, you know, there's Alex Krizhevsky, actually, famous for taking on the GPU as the context in which to do training, which enabled some of this magnificent scale. And yeah, I would say that the technological trajectory did seem far more alien until it was useful. So it wasn't really until 2022, with the invention of RLHF and the ability to constrain the outputs of the model to be helpful, to be valuable answers to your questions, to be what you needed it to be, that these systems were really exposed to people at scale.

00;16;47;16 - 00;17;08;22
Jeremy
And with Claude, actually: back in June or July 2022, I was added to a Slack channel that Anthropic had put together with Claude in it. It preceded the ChatGPT release by about five months. And it was clear to me at the time that they had built an AGI. It's actually part of the reason I renamed the house to AGI House: they had achieved generality.

00;17;08;24 - 00;17;33;05
Jeremy
Prior to that point, models had actually forced you to execute a specific task. Even deep learning models like convolutional neural networks trained on ImageNet would only do image classification. But these models actually would attempt to solve arbitrary problems that they were given. And we ended up coining this term, the foundation model, which in my mind should have been the AGI concept, in order to describe the new generality of this training paradigm.

00;17;33;07 - 00;17;42;09
Jeremy
And so those are the major touch points that I'd kind of point you to in terms of the origins of who created this alien intelligence.

00;17;42;11 - 00;17;55;25
Aaron
So what do you think drew them to this? Did they want to create the alien intelligence? Was it just, like, fun technical, you know, technological challenges? Like, what was kind of the atmosphere related to that?

00;17;55;28 - 00;18;29;04
Jeremy
You know, each person has their own story. But in a lot of ways, the origin story of OpenAI, when Elon created it, was to attempt to save the world, so to speak. And, yeah, Elon, you know, thinks grandiosely about things. And his fear of Demis Hassabis, and the idea that DeepMind would succeed at solving intelligence and using it to solve everything else, was a deep driver of his choice to fund the creation of a new research lab.

00;18;29;07 - 00;19;06;09
Jeremy
And, you know, Demis and friends had come out with this phenomenal Atari reinforcement learning algorithm, and in many ways that body of discoveries, oh, you can use deep learning to automate this agent activity, you know, was funded by Peter Thiel, who had, like, met them at the Singularity Summit. And so everyone who was obsessed with this singularity idea was at the fork: like Michael Vassar, who was putting on the Singularity Summit and had started the Singularity Institute with Yudkowsky; certainly Ray Kurzweil, who wrote The Singularity Is Near, with this clear conception that humanity was going to experience a takeoff of technological progress, a recursive process that would culminate in the total transformation of

00;19;06;09 - 00;19;41;20
Jeremy
everything we knew and understood. That underlying philosophy was the origin story of DeepMind, and was in Elon's mind when he recognized that this character, Demis Hassabis, who had won the Mind Sports Olympiad five times and was one of the most intelligent people on the planet, and who had created these video games in which a person takes on the persona of a god as they attempt to sort of command their followers, was likely to create a singularity-worthy technology. And thus, you know, he, Sam, Ilya, and Greg managed to do it first.

00;19;41;22 - 00;20;15;05
Jeremy
And, you know, there's some legal conflict that has exposed those communications; this isn't an exact history. And it's clear that there was this sort of fear of Larry Page, who, you know, had the level of force to get an acquisition of DeepMind done, meaning Larry and Demis were going to build this thing. And Larry and Elon had this fascinating kind of conversation about, you know, whether or not it's speciesist to prefer, you know, continued human dominance versus allowing these sort of AI systems to proliferate.

00;20;15;07 - 00;20;45;18
Jeremy
And so a lot of these philosophical trajectories turned into $50 million of funding for OpenAI, which kicked off a research program actually deeply inspired by DeepMind, with this ensemble of impressive researchers. And I guess it's to Sam's and Alec's, and on some level Bill Gates's, credit that they managed to pivot out of deep reinforcement learning into the sort of pretext-task, massive large-scale internet training data prediction paradigm that is characterized by GPT and GPT-2.

00;20;45;21 - 00;21;10;27
Jeremy
And certainly the creation of GPT-3 was this moment where everyone recognized, like, oh, okay, this is going to work. This is here, this is functional. This is, like, a new category of intelligence entirely. It's not reinforcement learning, which had sort of led that stage for many years. Actually, DeepMind was sort of in denial about the position RL would play, and RL ended up, very much like Yann LeCun said, being a cherry on top of the foundation model training paradigm.

00;21;11;00 - 00;21;36;12
Jeremy
And then, you know, these conceptions about the apocalypse, which drove folks like Sam and Elon to sort of brand it OpenAI and believe that, you know, somehow the collective had to be made better by this, also drove Dario apart from Sam, thinking, oh, actually, this particular organization is being insufficiently safety-conscious; we need to create a safety-first version of this company.

00;21;36;12 - 00;22;01;03
Jeremy
So once again, kind of pulling on the exact same, you know, ideological, emotional line in order to create a new company, Anthropic. Famously, I guess, it did not release Claude as the first API, out of a promise that they would not accelerate timelines. Timelines, to be clear, toward what was conceived of as a plausibly apocalyptic end: the singularity.

00;22;01;05 - 00;22;21;12
Jeremy
So I think a lot of people's motivations here sort of center around being the one who saves the world. If you look at the transition of the Singularity Institute to MIRI, Nate Soares wrote this sort of blog post on saving the world for, like, the effective altruism movement. That was a big part of how Anthropic transitioned out of OpenAI.

00;22;21;12 - 00;22;52;15
Jeremy
They were obsessed with this concept of existential risk and its reduction. And so playing, you know, the heroic role in that existential risk story is a huge part of the motivation for the creation of these organizations, and the bets that they make and the brands that they choose. Even OpenAI choosing the AI brand, which at the time was kind of a crank ideology, was looked down upon by colleagues of mine at Google Brain, who didn't believe that they were going to fulfill the totality of the dream of, sort of, the '60s, where folk thought that AI would transform everything incredibly quickly.

00;22;52;21 - 00;22;59;19
Jeremy
That said, they mostly turned out to be right, and they built an AI system. And on that basis, we have this renaissance.

00;22;59;22 - 00;23;22;16
Aaron
Why do you think they were skeptical? Was it just, like, battle scars from the decades beforehand? Because I actually just happened to watch some old Ray Kurzweil stuff, and he was musing that at some of the first, you know, conferences related to artificial intelligence, like in the late 1950s, I think it was at Dartmouth, Marvin Minsky thought that all this would get, like, wrapped up in a couple months.

00;23;22;16 - 00;23;35;29
Aaron
And he was more skeptical related to it. So I'm kind of curious what that conversation was like. We've heard about that, right? There was some skepticism that this approach would work. But I'm kind of curious, you know, why they thought that was the case.

00;23;36;02 - 00;24;03;13
Jeremy
I mean, it was actually frustrating and shocking. So at Google Brain, it was actually hard to get people on board with large language models. The belief in the executive team, who had folk working under a very uncertain regime, primarily was that these were hallucination machines, and they would damage the reputation of Google. Because if you look at early models, the ratio of hallucinations is wild.

00;24;03;16 - 00;24;23;10
Jeremy
And it wasn't until we got really high-quality LLMs that these things got under control. But I think they were of two minds. First, if it works, it's a danger to search, which is the core process by which Google makes money. Second, if it is a hallucination machine, it will be damaging to the reputation of Google.

00;24;23;13 - 00;24;56;21
Jeremy
And so I think there were these two big arguments in the minds of these executives. And now, when it comes to skepticism of OpenAI, there is a sort of engineering, probabilistic-thinking kind of perspective. So if you're working in, you know, Bayesian data analysis, and you're a serious statistician, and primarily your identity is around, you know, mathematical depth and complexity, someone who is a futurist is gauche as a character.

00;24;56;21 - 00;25;30;25
Jeremy
I think they have a lot of weird, strange, and radical ideas that are mostly ungrounded. They're making predictions for which it doesn't seem like there's clear evidence. And, you know, in a lot of cases, these are rigorous people who are really interested in practicing science and engineering in a serious way. And so there's not a ton of room in their minds for kind of wild, futuristic personalities, despite the fact that they are more equipped to make progress than someone who is sort of merely a philosopher.

00;25;30;28 - 00;25;55;02
Jeremy
And so I think it is organizations that manage to bridge the gap between, you know, incredible technical talent and sort of high-flying, big-picture philosophical vision that have the highest risk-reward profiles of organizations. Several people from Brain created OpenAI as well. And so there's really the continual threat that people will defect from one organization to the other.

00;25;55;05 - 00;25;57;24
Jeremy
So there's some political element to it as well.

00;25;57;27 - 00;26;33;26
Aaron
So it's some combination of, like, worry about cannibalization, like, human hubris, and then also lack of imagination. It was kind of that combination that maybe made people dwell on that. And I guess, going back and even tying back to your initial point, maybe these increasingly alien-seeming intelligences may not have those preconditions, or, you know, it could just be the bias from some of those priors, such that maybe we get new stuff, we get new stuff faster, or interesting things, etc.

00;26;34;03 - 00;26;41;17
Jeremy
Yeah, I do think they lacked imagination, and I think people still lack imagination, crazy as that may sound.

00;26;41;17 - 00;26;49;11
Chris
No, that doesn't sound crazy at all. I think technologists could use more imagination.

00;26;49;18 - 00;27;15;22
Aaron
Yeah, I actually think that that's, like, almost the root issue of our time: that, like, for some reason we've stunted our ambition and are unwilling to kind of push into more imaginative environments. Like, we talked about this a couple weeks ago on the pod, but if we were around from the 1890s through the 1960s, like, that entire world just completely transformed, and it required a tremendous amount of optimism and imagination.

00;27;15;22 - 00;27;35;15
Aaron
I just feel like we don't tap that vein as much as we used to. Why, I'm not 100% sure, but it feels like that's the case. I mean, think about how bold it was in the 1920s to be like, we're going to build rocket ships to go to the moon, right? And they did it a couple decades later. Or connect the entire world with electricity and telecommunications.

00;27;35;15 - 00;27;35;23
Aaron
Right.

00;27;35;29 - 00;28;06;15
Jeremy
So I wrote a 100-pager on this, titled Thiel on Progress and Stagnation, so if you Google it, you can find it. And the core idea is that stagnation, a concept that has been popularized by Peter Thiel, has characterized the last few decades of, you know, certainly American technological progress, and in many, many ways worldwide technological progress. It makes this distinction between the world of atoms and the world of bits, the world of bits being, you know, software.

00;28;06;23 - 00;28;30;07
Jeremy
there are plenty of examples of progress, you know: Google, social media, cellular phones. But in the world of atoms, you bring up the space race; what was happening in rocketry was tragic prior to SpaceX. Certainly what was happening in transportation was also tragic. You know, I mean, we are moving slower; trying to build railroads in the United States really wasn't working, despite the fact that it was clearly technically possible.

00;28;30;15 - 00;29;04;07
Jeremy
And a lot of these political and cultural forms of malaise were damming great ideas. So, for example, we used to have supersonic flight. We used to have the Concorde. It's no longer the case that, you know, you could take a supersonic flight from anywhere to anywhere. And so there's this sort of cultural question about whether the forces that be, everything from environmentalism to sort of new political ideas, are actually, you know, deeply neo-Luddite types of community, where primarily people are opposed to scientific progress and opposed to technological progress.

00;29;04;09 - 00;29;31;13
Jeremy
There's a question of whether it's nuclear fear. On some level, I think scientists, especially physicists, felt in the face of the atomic bomb that our scientific ambitions were apocalyptic by their nature, and on that basis, cultures of scientific progress were seen as dangerous. And, you know, you could form entire ideological traditions in this kind of opposition to scientific progress.

00;29;31;15 - 00;29;56;06
Jeremy
And, you know, you can actually see the transition of Eliezer Yudkowsky in the rationality community, from being a transhumanist to being an anti-existential-risk neo-Luddite, as exactly this category of apocalyptic, fear-centric communal transition. And yeah, I think there's a deep philosophical conversation happening in almost everyone's hearts on that basis.

00;29;56;08 - 00;30;29;25
Chris
Hey, Jeremy, can we talk? You use the word apocalyptic a lot; it's probably the most-used word on this pod so far. What is the philosophical breadth and depth of this community? Because this is something that, as an outsider, isn't entirely clear to me: how well-rounded are these people? Do they have a preference for more simplistic solutions? Like, I'm not scared of machines; I'm scared of the influence effective altruism has over the people building the machines.

00;30;29;25 - 00;30;59;27
Chris
I personally hate Peter Singer, full stop. You know, and then as we start getting into ideas of eschatology, I don't know if you've read the Meghan O'Gieblyn book, God, Human, Animal, Machine, but she basically boils all of this transhumanism and futurism down to a transference of eschatology from Christ onto, you know, our engineering and building capacity.

00;30;59;27 - 00;31;15;22
Chris
Like, these people who are so worried about an apocalypse, you know, how well-versed are they, and are they actually drawing from a full range of sources to have this, you know, sort of fear? Is it justified or not?

00;31;15;24 - 00;31;50;05
Jeremy
I think that the thesis that there's an eschatological transition is accurate. I think that actually the Abrahamic religions, which have a clear tradition of apocalypse, are much more receptive to the transference of that idea onto artificial intelligence than, for example, Chinese culture, which has much less of this kind of religious tradition. You may know that Mrinank Sharma, the Anthropic employee, infamously quit.

00;31;50;05 - 00;32;14;28
Jeremy
I think it was all over national news, because of fear that the world is in peril. And in his resignation letter, he, you know, brought up a lot of AI-centric bioterrorism risks. And so it is the case that Anthropic is very much wrestling internally with employees whose psychology has to do with the world being in peril.

00;32;15;00 - 00;32;45;15
Jeremy
Now, I think that a lot of these cultural forces are actually not particularly grounded in feedback loops with reality. And a big part of why I was open to creating AGI House, which is, as a brand, still quite accelerationist in nature, is that I believe that we are in more danger from failing to make progress, and continuing to live in a world where everyone we know is dying of natural causes,

00;32;45;17 - 00;32;53;26
Jeremy
than we are from risking a runaway feedback process that consumes everything. Yeah, I guess I'm.

00;32;53;26 - 00;33;10;13
Aaron
With you on that, Jeremy. Completely. Because, you know, it always strikes me, even, like, when Dario was on The New York Times with Ross Douthat. I can never pronounce his last name, but he's a great interviewer. And it's so apocalyptic. And I'm just like, I don't know, these seem like... they're problems.

00;33;10;18 - 00;33;28;15
Aaron
We should acknowledge them. We may want to address them. But I just feel like, you know, human creativity, or even human-plus-AI creativity, could come up with some reasonable solutions for whatever reasonable risks people are identifying. It just feels like they want it to be apocalyptic in some sort of way. Like it's, like.

00;33;28;15 - 00;33;29;14
Jeremy
Some weird way to save the.

00;33;29;14 - 00;33;35;17
Aaron
World. Yeah. It's like a weird fetish of theirs. It's like they've almost fetishized it. They're, like, so into it.

00;33;35;19 - 00;33;40;00
Jeremy
They don't give it enough credit. It actually is responsible for their existence. Oh, wow.

00;33;40;02 - 00;34;04;21
Chris
Sure, it's responsible for their existence, but is it grounded? Right? Like, that's what I'm trying to get at. These people who stay up at night and are worried about these things, have they one-shotted themselves because they have such, like, a narrow specialist focus, and, you know, they haven't given enough time to the humanities, philosophy, you know, any number of other fields?

00;34;04;21 - 00;34;34;04
Chris
And, you know, they've kind of info-hazarded themselves. Or are these legitimate concerns? And I do think they are legitimate concerns; I'm not saying that AI cannot bring about the end of the world. But the focus and the emphasis and the worry about that is, you know, what I'm curious about. Because, you know, if you're outside the industry, if you're like us here in New York City, you're not running around these circles having these conversations.

00;34;34;04 - 00;34;59;02
Chris
And so your data points are, like, you know, TPOT and e/acc on Twitter, which is, like, a feedback loop to them, and they're one-shotted, and you're like, what the hell are these people? And then you've got, like, you know, Dario out there doing the same. It's really hard to get a great sense of whether these people and their concerns are legit, from, like, a basis of exposure and understanding.

00;34;59;04 - 00;35;28;19
Jeremy
Maybe you should think of it this way. What is the most incredible and epic way to live your life? One answer might be that your life can be the fulcrum on which the totality of the future of human civilization depends. There are very few ways to live out a real life that has that kind of maximal counterfactual, that is, where your decisions are changing the trajectory of civilization, that kind of value.

00;35;28;21 - 00;35;56;16
Jeremy
And so it is arguable that this is a transition to a new category of intelligence. You know, human intelligence was responsible for building the totality of human civilization; well, what will this form of intelligence be capable of? This is actually a unique opportunity for the subset of minds that are interested in total value maximization, let's put it that way.

00;35;56;18 - 00;36;23;16
Jeremy
And in a world where you have a literal apocalypse, that level of meaning, where, like, every action you take, every word you speak is in some probabilistic way engaged with the totality of the existence of this human project, is far more meaningful as a value system than any competing meaning system. It's more grounded in its apocalyptic fear than climate change.

00;36;23;18 - 00;36;55;08
Jeremy
It's more technically intellectual. And so there's space for filling a curious mind with lots of very detailed ideas about how it could or could not happen, and research trajectories that may or may not enable it. And there are a number of really intelligent people with whom you can commune about these ideas. And so there are, yeah, a number of folks who, like, were brought into this effective altruist project, in part with the intention of growing the movement.

00;36;55;08 - 00;37;37;04
Jeremy
So, you know, on every college campus there's a branch, you know, Harvard, Yale, MIT, etc., having bright college students think through the consequences, for the trajectory of human civilization, of the advent of artificial intelligence. And their conclusion might be that it's utopic; their conclusion might be that there's some eschatological implication. But actually it allows for what is typically a converted former Christian to take a number of religious instincts and, yes, project them onto technology, but also situate their own personal life actions as being potentially the most important actions that have ever existed.

00;37;37;07 - 00;38;03;11
Chris
Right. And so that's kind of the point I keep probing at here: is this not a naive and unsophisticated view of the world, in which someone has not had a confrontation with nihilism, someone has not, like, explored concepts of ego death? Like, these are people in search of meaning systems who perhaps are unwilling to confront the fact that we live in a world without meaning.

00;38;03;11 - 00;38;35;22
Chris
We live in a universe that is silent, uncaring, and it is up to each person to find their own meaning in the world. Like, these are people who seek transcendence, right? This is, like, I'm working at this apocalyptic scale. When in fact, like, you know, if you probe the depths of philosophy, it all stalls out around two basic points: either you've got to believe in God and embrace dualism, or life is the meaning you make for yourself, right?

00;38;35;22 - 00;38;47;24
Chris
Like, that's kind of where both these branches peter out. And to not cross that chasm, and to set yourself up as a superhero, right, is just, like, a naive understanding of the world.

00;38;47;26 - 00;38;50;16
Jeremy
I mean, I think the story is far more interesting than that.

00;38;50;20 - 00;38;51;13
Aaron
How so?

00;38;51;16 - 00;39;16;16
Jeremy
Yeah, I think that in the absence of technology that really worked, that would be a reasonable read. But I think there's another read of the scenario, which is, you know, the idea that this technology works, and it's not merely a big deal; it changes everything. The way that all work and all thought is done is about to change.

00;39;16;16 - 00;39;52;16
Jeremy
The way that everything is invented is about to change. And, you know, in the world where you accelerate scientific progress by five years, maybe that's chill. But maybe you accelerate it by 50 years. Maybe you simultaneously get nuclear weapons and you get spaceflight and you get supersonic aircraft and you get surgeries that allow the implantation of brain-computer interfaces, which allow for all sorts of phenomenal, you know, cyborg-style progress.

00;39;52;18 - 00;40;19;06
Jeremy
Maybe things accelerate to a point where you can no longer comprehend what's happening or control the scenario. A lot of these are relatively grounded expectations. I think that a lot of people who have watched this trajectory of AI progress think that in the next 9 to 12 months, AI systems will be able to recursively self-improve, that they will be able to perform at the level of an elite AI researcher.

00;40;19;09 - 00;40;52;23
Jeremy
At that point, the amount of compute is really the pure bottleneck to discovering new methods and techniques in artificial intelligence itself. And Infinity as a company is sort of exactly this style of company: it's an AI system that automates the discovery of new AI algorithms and uses metric centricity to do it. And so I think there is some grounding to the projection that not only will almost every knowledge-work job be turned into inference in the next five years,

00;40;52;26 - 00;41;16;18
Jeremy
but the most important civilization-scale work, which is primarily scientific discovery and progress, these foundations of the Enlightenment, that the ability of AI systems to conduct research at scale autonomously will unlock hundreds of technologies a year that are genuinely important. And so there is this big question of, let's.

00;41;16;18 - 00;41;18;07
Aaron
Go, it's going to be awesome.

00;41;18;09 - 00;41;21;16
Jeremy
Yeah. And maybe LFG is one reply, but.

00;41;21;18 - 00;41;45;11
Chris
How do you feel that differs, though, from when we got the new physics? Because when you said simultaneously all these things, right, like, we did get rockets and atomic weapons, and, you know, all of that came out of the generation of physicists that followed Einstein. A smaller group, maybe, you know, with less leverage and ability to impact the world.

00;41;45;11 - 00;41;53;11
Chris
But they certainly did impact the world. Like, there are precedents for, you know, this community you're talking about.

00;41;53;14 - 00;42;20;07
Jeremy
I do think that one unusual aspect of computer science is this sort of question of speed. When you write a program, the version of the program that runs 100 or 1,000 times faster does not look different in any substantive way from the one that runs at normal speed. And so it's true that we got these technologies, but we got them over the course of a few decades.

00;42;20;10 - 00;42;52;09
Jeremy
I think that one important question is whether there is a point at which inference becomes dramatically cheaper, and the scale at which automated inference for automated research can be executed is sufficient, in the face of an algorithmic discovery, that we get everything simultaneously. Imagine, for example, within a week, getting the internet and radio and the printing press and, you know, steam power, nuclear power, vaccination.

00;42;52;10 - 00;43;16;19
Jeremy
Imagine, like, every single conceivable research project that's within the composition of existing human research being invented more or less in parallel, where the physical representation of those inventions... and that's great, plausibly great. Yeah, plausibly great. But I think this is, like, this is not a fake story. It sounds like sci-fi, but actually.

00;43;16;22 - 00;43;38;06
Aaron
No, I think it's inevitable. Yeah, I think it's inevitable at this point. But then, you know, I think it's an incomplete story, because then there's, like, diffusion: how fast is this stuff going to get operationalized, right? Like, there's the world of atoms that you talked about. Sure, we may get some of those advancements, but it will take some time for that to get embodied in some sort of machine or device that can actually implement it.

00;43;38;06 - 00;43;57;02
Aaron
I just think it's a very complex story. And I think, Chris, maybe it's almost like a natural corollary: we saw a lot of folks, you know, use equity-based arguments as, like, a new form of religion, right? I feel like this is almost, like, another subtheme related to that. Like, people are looking for hope.

00;43;57;02 - 00;44;16;16
Aaron
They want to be apocalyptic related to it. They can only see downside risk, which is fair, but they don't realize how complicated and dynamic a system we live in, and how there will be things that kind of self-correct related to that. But, you know, maybe I'm wrong on that front. I've always been much more optimistic about these pieces.

00;44;16;18 - 00;44;32;01
Aaron
I think it's important to, like, obsess about the worrisome parts of it, and hopefully people are. But I don't know, it just, like, strikes me as attention-grabbing. They want power in some sort of way. There's, like, some gnarly edge to some of this apocalyptic talk. Like, they want to, like.

00;44;32;01 - 00;44;42;08
Jeremy
Be central. A central player. And I think that's most of the argument. It's like: by creating Anthropic, we will be the ones in control when, shall we say, the singularity occurs.

00;44;42;11 - 00;44;55;16
Aaron
Yeah, we should be the ones controlling that destiny. Like, we need to be the high priests of the singularity, you know, not somebody else. And to me, that feels wrong in some capacity. The whole thing kind of seems wrong. But maybe I'm being naive.

00;44;55;21 - 00;45;24;27
Pri
I agree, but do you think there'll be some, like, political or cultural backlash to that kind of line of reasoning from Anthropic, among people wanting to use those models? Like, do you think people will start culturally aligning themselves with what they believe? Because, I mean, in many ways there's a very paternalistic political dynamic there, you know, if the view is that Anthropic feels like they know better than maybe the users themselves, or, you know, they feel like they're protecting users.

00;45;25;00 - 00;45;33;21
Pri
I could imagine people potentially railing against that if they start feeling it in the usage of the app itself.

00;45;33;23 - 00;46;10;18
Jeremy
Yeah. I mean, in many ways, there's no controlling the progress anymore. You may have seen that MiniMax just released a model that outperforms Opus 4.6 on SWE-bench 2.0, that these open-source rivals have closed the gap, Kimi K2.5, that there are so many capabilities that are understood by multiple companies, and the ability to make progress is being distributed. You know, OpenAI started with this idea of we're going to open-source everything, and turned into a closed AI, practically speaking.

00;46;10;25 - 00;46;35;11
Jeremy
But the ideal of open AI is alive and well in the world today. The movement in Anthropic to kind of coordinate progress using legislation is almost an admission that they don't expect to be able to batten down the hatches on automated progress themselves, despite making it, you know, their public position that they're working on automating AI research.

00;46;35;14 - 00;46;48;17
Jeremy
And so I think there's, yeah, a very real game-theoretic dynamic in which everyone is enmeshed and which cannot be opted out of. Yeah.
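
That race dynamic can be sketched as a toy two-player game. Everything here is invented for illustration, not anything stated in the episode: two labs each choose to hold a frontier model back or release it, and with these made-up payoffs, releasing is the best reply no matter what the rival does, which is the sense in which no one can opt out.

```python
# Toy model of the release race: two labs each choose to HOLD a frontier
# model back or RELEASE it. All payoffs are invented for illustration.
HOLD, RELEASE = "hold", "release"

def payoff(mine: str, theirs: str) -> int:
    if mine == RELEASE and theirs == HOLD:
        return 3   # ship while the rival waits: capture the market
    if mine == RELEASE and theirs == RELEASE:
        return 1   # both ship: competitive parity
    if mine == HOLD and theirs == HOLD:
        return 2   # coordinated caution (but unstable)
    return 0       # held back while the rival shipped: irrelevance

# RELEASE is the best reply to every opponent strategy, so mutual
# release is the only equilibrium even though mutual holding pays more.
for theirs in (HOLD, RELEASE):
    best = max((HOLD, RELEASE), key=lambda mine: payoff(mine, theirs))
    print(f"rival plays {theirs} -> best reply: {best}")
```

With these numbers the game is a standard prisoner's-dilemma-shaped setup; the specific values don't matter, only the ordering that makes releasing dominant.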

00;46;48;17 - 00;47;04;28
Aaron
And OpenAI is doing the same. Right. I think they announced with their last model that it was partially developed by the AI system itself. So do you think that's further along internally than what they're publicly disclosing? Do you think they kind of ran through the game theory using these systems too? And like that's the strategy that they're implementing?

00;47;04;28 - 00;47;19;09
Aaron
I mean, that's what I've always thought: that if they do have, like, a super powerful model, there's an incentive to not release it, right, and to use it for your own asymmetric advantage. I always wonder if these strategies are kind of influenced by that.

00;47;19;15 - 00;48;00;16
Jeremy
That's true. I don't think there's a thing that they can use the model for asymmetrically that overcomes their need to continually raise tremendous amounts of capital and, practically speaking, continually release models that are superior to their competitors' models. I'm sure you're familiar with the Sam Altman code red in early December, in the face of total Anthropic dominance with respect to coding agents. And there's really little capacity to not go after automated AI research in a world where it's clear that, in your failure to go after it successfully, you will become irrelevant relative to your competitors.

00;48;00;23 - 00;48;18;24
Jeremy
And so that is the game theory that I refer to. It is possible that an AI system would sort of figure out a way out of this sort of game-theoretic setup, but I think that it is an obvious scenario that was anticipated by, you know, many thoughtful people a decade or two in advance.

00;48;18;27 - 00;48;36;00
Jeremy
And now, the thing I mostly disagree with those people about is that there's, like, a single point to which all of this is converging, or a single point at which we have, like, the AGI, at which point everything changes. It seems like there's kind of just a continuous improvement in capabilities across the board.

00;48;36;02 - 00;49;13;10
Jeremy
And we will continue to have revolutions like, you know, the Claude Code revolution of late 2025 and the OpenClaw revolution of early '26. And yeah, it does feel like we are on a trajectory that is building now, as opposed to dying out, right? And, you know, this year will be a year in which tons of agents start to work and work well, and a huge fraction of our work will become interacting with autonomous systems that are performing most of our work on our behalf. And the, you know, ease of customization of those systems, and their security and their simplicity, will be a big part

00;49;13;12 - 00;49;37;16
Jeremy
of progress this year. And so I just don't think that there's any clear way out of this happening, even if Anthropic and OpenAI go home today. There are just umpteen organizations that can functionally fulfill their promises. Certainly, you know, Gemini, certainly MiniMax, certainly Kimi and DeepSeek and Qwen, and GLM 4.7 is phenomenal.

00;49;37;16 - 00;49;50;28
Jeremy
There are so many functional tools that you can use today, and teams who will kind of pick up the torch, so to speak, if they choose to count themselves out now.

00;49;51;01 - 00;50;10;23
Chris
OpenAI and Anthropic, right, are in this Darwinian struggle in which they need capital markets. Therefore, you know, they have to prove their worth to continue getting more capital. Google is positioned a little bit differently, and it feels like, especially when it comes to what they're doing at DeepMind, they prefer to sit on a lot of their advancements.

00;50;10;25 - 00;50;13;14
Chris
How much is Google holding back?

00;50;13;16 - 00;50;30;04
Aaron
Yeah, I mean, that kind of is related to my question, Jeremy. I hear you on that, and playing it back, when you were saying that, I was thinking to myself: okay, the demand for capital and compute is the constraint, so it's pushing, you know, these organizations to release as fast as possible.

00;50;30;07 - 00;50;51;04
Aaron
But I do think that there still is some delay. And at some point there is, like, an asymmetry that could emerge, especially if it's self-improving, right, where they could develop, like, a long-term asymmetrical edge. You know, where: okay, we're going to keep something that's a little bit more powerful behind the scenes, and release it, you know, in accordance with our competitors.

00;50;51;11 - 00;51;08;26
Aaron
But this will kind of give us, like, a primacy in terms of the market and in terms of the long-term, kind of, game-theoretical outcomes. And it feels like, if we're not there, we're kind of edging towards that. You know, Chris, I don't know if it's, like, the Genie advancements that kind of give you that question.

00;51;08;26 - 00;51;15;07
Aaron
But they kind of give me that question too, because that feels completely alien to me, just in terms of how that, like, operates and works.

00;51;15;09 - 00;51;23;09
Jeremy
Yeah. I used to think that there was about nine months of holdback, so to speak, and I currently think that there's less than one month of holdback.

00;51;23;11 - 00;51;33;12
Aaron
Got it. So you think that the capital constraints are now... which is good, right? The demand for capital, the demand for compute, is actually leading to a more competitive environment.

00;51;33;14 - 00;51;45;11
Jeremy
Maybe I should be more concrete. So OpenAI completed the training of GPT-4 in August 2022 and did not release GPT-4 until March 14th, 2023.

00;51;45;14 - 00;51;50;10
Aaron
Right. So there was a nine-month, almost ten-month delay, right? Yeah.

00;51;50;12 - 00;52;11;00
Jeremy
Yeah, exactly. Whereas in the last month and a half, Anthropic released Opus 4.6 and OpenAI released 5.3. And so I think they're releasing incremental upgrades to everything that they build, more or less as they train.

00;52;11;00 - 00;52;25;06
Aaron
Is that where some of the friction is coming from? The trust and safety teams, or however they're describing them, is it that they feel like they're not doing enough internal testing to make sure it doesn't have some nefarious edge case?

00;52;25;08 - 00;52;37;23
Jeremy
I mean, how relevant is that argument in a world where abliterated versions of open-source, state-of-the-art models that are as capable as Opus are released on the open web?

00;52;37;26 - 00;52;51;29
Aaron
I'm with you. Yeah. I mean, I would lean toward your camp on that one, but I could see why that would lead to some friction or concern, especially if you do have this, like, doomsday, apocalyptic vantage point.

00;52;52;00 - 00;53;17;10
Jeremy
I think a lot of folks are recognizing that we are not all dead yet, and we have incredibly potent systems, and they haven't changed in type and category dramatically. And so it does feel like we can predict, at least to some degree, what capabilities will look like, and none of those capabilities look apocalyptic. So I think actually there's quite a severe reduction in the amount of explicit apocalyptic thinking

00;53;17;13 - 00;53;32;02
Jeremy
that's happening in these labs and among these communities. And so it's kind of refreshing on some level. There's the simultaneous, like, oh, it was a relief, but simultaneously there's a more total loss of control.

00;53;32;04 - 00;53;48;11
Aaron
So it's like the marketplace for ideas is whittling out some bad ideas, potentially, and now there's just, like, the concern related to, you know, it spinning out of some weird control. Super fascinating.

00;53;48;13 - 00;54;07;20
Jeremy
Yeah. I guess, I think that there's also a lot that can be done around invention here. In many ways, like, OpenAI and Anthropic copy one another to a degree that makes it hard for them to be creative. I think there are just a lot of ideas that deserve that level of devotion and attention.

00;54;07;22 - 00;54;42;07
Jeremy
For example, it felt obvious that we would get web simulators in the face of really high-quality code-generation models. Sort of like how my roommate in our house, Anton Osika, built Lovable, which exploded early last year in revenues as people created tons of websites at scale. And the, you know, experience of a web that is totally generative, a generative web where every website that you go to is created on demand, or is co-created collectively by everyone who uses it and is in a feedback loop with every user, that just doesn't exist.

00;54;42;09 - 00;55;01;22
Jeremy
There are a lot of ideas like that, that I expected to catch fire, that have been, in my opinion, lost to the race to build a very specific kind of system, where there's a straitjacket on the back of every research group that takes on tremendous amounts of capital to produce a very specific kind of result, and to produce it on a ridiculous timeline.

00;55;01;25 - 00;55;06;18
Jeremy
And so there is just no space for collective creativity.

00;55;06;20 - 00;55;38;04
Chris
Yeah. Earlier you mentioned, you know, that these things are capable of alien intelligence, but that no one has actually gone and, you know, spent the capital to develop something truly novel, because of these economic conditions, this Darwinian competitive struggle. Let's say someone walked up and gave you $1 billion and, you know, 40 genius people and said, look, I just need something novel here, right?

00;55;38;04 - 00;55;59;22
Chris
Give us something we've never seen before. What direction would you go in? How would you go about doing it? You know, like, what does it actually take to step outside late capitalism and, you know, produce something completely net new? Because you're implying we have the technology today; we're just not actually applying it.

00;55;59;24 - 00;56;31;24
Jeremy
Yeah, I believe that. I would, you know, immediately start. I'll just give you, you know, 5 to 10 somewhat radical ideas. So there was this victory at a hackathon in 2023 with a system called "LLMs Are All You Need for the Backend." And so I would love to have apps which treat the context window as a database, and which attempt to allow you to create arbitrary applications where the database can basically, flexibly, be whatever you need it to be as soon as you need it.

00;56;31;27 - 00;56;56;19
Jeremy
I would love to create a data source, a shared data source. You know, think of it as loaded with every conceivable, like, email and contact info, via LinkedIn, via Twitter, etc., etc., that a person has, and let that data source be flexibly queryable by a single generation system that allows you to build arbitrary apps on top of this sort of total data-access layer.

00;56;56;21 - 00;57;30;01
Jeremy
On some level, you can see Clawdbot, which allows you to connect all of these data sources, as being something like that. But I feel like until very recently, we didn't have any systems which would enable that kind of comprehensive data access. I would also like to build some really base-level systems. So, you know, your computer's operating system is acting as, generating, you know, your web page here, your video stream, your, you know, text messages. Every single action that's being taken on your computer is represented in these OS system calls.

00;57;30;06 - 00;57;55;14
Jeremy
So I'd love to train a model that is a simulator of those OS system calls, first and foremost, but that eventually allows you to create a computer out of this sort of systems-generation process, which is a new kind of computer that, practically speaking, can execute arbitrary programmatic actions as a function of your desire, and is plausibly, you know, interactable via a voice interface.

00;57;55;16 - 00;58;21;14
Jeremy
As I said, I'd love to create a true web simulator, where people primarily spend time in generated environments, where by environment I mean the website. I'd like to build company generators. So the obvious question is, when will we get end-to-end company generation? A lot of that is about marketing, but a decent amount of it is, like, attaching payments to web apps and to phone apps in a way that's cleanly integrated,

00;58;21;14 - 00;58;47;17
Jeremy
and that allows you to go from idea to functional paid app. Because as soon as that category of system can make profit on the margin, it can explode into the creation of every profitable business that it is generating, and that can all happen more or less simultaneously, as soon as the system is able to reliably predict which of the companies it creates will be profitable.

00;58;47;19 - 00;59;33;14
Jeremy
Certainly there's a bunch of very wildcard, base-model-centric ideas. I think base models are wildly creative, and that RLHF can be treated as a kind of mistake in technological trajectories. And on top of the base model, you can do tons of research around simulators and around automated creativity. And you can have these base-model cyborgs be generating worlds, so you can center the environment generation task as, like, the most important task, where rather than focusing on replicating language, you try to replicate the environment, and then, by replicating it successfully, have environmental feedback.

00;59;33;17 - 00;59;52;12
Jeremy
And so, you know, you can see the website generator, or internet generator practically speaking, as being like an environment model, which is primarily trained so that you exist inside of the network's generations, as opposed to querying it as, like, an assistant to solve your problems for you.

00;59;52;15 - 00;59;56;19
Pri
Jeremy, I love that you just, like, spun up, like, product ideas. That I.

00;59;56;19 - 01;00;01;24
Aaron
Didn't know what else to say here, but maybe the right phrase is, fuck yes, let's do it.

01;00;01;26 - 01;00;18;13
Pri
I know. I think you're right though. A lot of these people do have, golden handcuffs is the expression you used, like the people who are capable of executing at a high level, many of them, not all of course, are like in a straitjacket a little bit. It's like we're kind of in a tough spot.

01;00;18;13 - 01;00;23;00
Pri
You have to be willing to leave that to then go do something a little bit more imaginative.

01;00;23;03 - 01;00;48;00
Jeremy
Yeah. And I think for every major category of creativity, there should probably be multiple companies inventing novel solutions, to, you know, among other things, cancer therapies that involve antibody generation with metrics-centric feedback, and, yeah, I guess most subfields of science, including theoretical physics. My friend Steve Hsu just published a paper looking at the Schrödinger equation,

01;00;48;00 - 01;01;13;27
Jeremy
not in the form of the Feynman path integral, but with another formulation. And in that formulation, there are a lot of indications that those equations may not be linear, that there may be other terms. And so the Schrödinger equation might admit something like relativity, or something at that scale. And I'm sort of surprised at how few people are even, you know, working hard in this category of automated physics research, like there's no frontier physics dataset.

01;01;13;27 - 01;01;33;08
Jeremy
I have heard some rumors, even, that this "physics superintelligence" company is working on this, but empirically it just hasn't happened yet. I think a lot of automated research was set back by the response to Galactica from the Max Planck Institute, and I think a huge fraction of the scientific edifice is going to stand in the way of this new paradigm in scientific progress.

01;01;33;11 - 01;01;42;08
Aaron
Can you unpack that? What was that? And why do you think that that's the case? Is that just a stability argument that they were worried about or like,

01;01;42;10 - 01;01;45;03
Jeremy
I'm so sorry, is it about Galactica and Max Planck, or about...

01;01;45;03 - 01;01;47;29
Aaron
Yeah, the Galactica Max Planck point. Yeah.

01;01;48;01 - 01;02;13;01
Jeremy
Yeah. So the Galactica story is really beautiful, in that people created a model that generated LaTeX. As you may know, LaTeX allows you to write scientific papers, and so you can generate tables, you can generate visualizations that are embedded, and so this sort of scientific knowledge generation system would create formulas and Wikipedia-style articles that were gorgeous.

01;02;13;07 - 01;02;47;02
Jeremy
And so Meta released it with this insane video, really fast music, really intense esthetics. And there was this insane response from the scientific community that this algorithm hallucinated. Actually, I think it's close to the fear that Google executives had, which grounded their choice not to launch language models at all. But Galactica was critiqued by Michael Black, who's the director of the Max Planck Institute.

01;02;47;04 - 01;03;15;22
Jeremy
And he, you know, called out Galactica, said he was troubled that it's easy to generate content that is confident in how right it is. It sounds authoritative, but actually it generates hallucinations, these sort of fake paper citations or incorrect answers, so he felt it was dangerous. He was afraid that Galactica-style text would slip into real scientific submissions.

01;03;15;24 - 01;03;45;07
Jeremy
Be hard to detect, and, on that basis, be damning to scientific progress, that scientific deepfakes would proliferate. And so Meta took Galactica down and canceled the project. People who were interested in launching scientific projects were warned away by people like Michael Black, who would call out any mistakes made by the AI as being damning to the future of scientific progress.

01;03;45;10 - 01;04;22;00
Jeremy
So for about two years, there were no substantive launches of AI science-centric projects. It was a canceled subfield of research. Now, I don't think Michael had anything other than a desire to defend and protect the edifice of scientific capability. But it felt obvious to me that Galactica was launched in November 2022, two weeks before the launch of ChatGPT, and so we hadn't overcome a lot of these hallucination problems, which are technical problems at the end of the day.

01;04;22;02 - 01;04;37;28
Jeremy
So search in the loop, you know, Perplexity actually hadn't launched yet. Yeah, there were a lot of really important ideas that resolved these problems that had not yet been invented. And so I felt like it was tragic that this sort of got a black mark from the scientific community.

01;04;38;00 - 01;05;01;00
Chris
Jeremy, there's so much research, or like everything around it to some degree, that is predicated on predicting the next token and being a prediction machine. Is there research around decisions not made? You know, especially as we start getting into world models and simulation, right? Like, everything is about what happened before and how do we anticipate what happens next.

01;05;01;00 - 01;05;06;04
Chris
But what about like negative space? Is anyone actually looking at that?

01;05;06;07 - 01;05;38;02
Jeremy
Well, there's a company that launched yesterday called Simula, from the Stanford folks who created this island of AI agents, that is interested in replicating the behavior inside of a human-like society, where every agent kind of performs the role of a person in that society. They make tons of decisions. I think one tragedy in decision-making evaluation is that measuring counterfactuals is usually very difficult.

01;05;38;02 - 01;06;03;01
Jeremy
So, you know, what if you made a different decision, what would have happened? Well, you actually have to play out what the world was like in that case, long enough to see through to the consequences of the decision and creating simulations that are that accurate is actually incredibly challenging. And so you might think, well, actually, any particular simulation path is going to be noisy.

01;06;03;01 - 01;06;31;29
Jeremy
There are going to be hundreds of other decisions that are made after your decision, but that are germane to the consequence that you care about. And so, in order to get statistical significance on whether that counterfactual decision was the right one, you have to spin up thousands or tens of thousands or millions of simulations to reduce the probability estimate window to something that feels like you could depend on it.

01;06;32;01 - 01;06;51;09
Jeremy
And as far as I can tell, that is a very robust constraint, and the overall accuracy of the simulation of the world that you need to make these decisions effectively is far higher than present-day gigantic simulation systems are capable of representing.
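
[Editor's note: the statistical constraint Jeremy describes, that separating a counterfactual decision's true effect from downstream noise takes enormous numbers of rollouts, can be sketched with a toy Monte Carlo. The payoff model, noise scale, and run counts below are invented for illustration; the point is only that the standard error shrinks like 1/sqrt(n).]

```python
import random

# Toy counterfactual evaluation: roll out many noisy simulations of a
# decision and average the outcomes. The per-run Gaussian noise stands in
# for the hundreds of downstream decisions that also affect the
# consequence you care about.

def simulate_outcome(decision_quality, rng):
    # True effect of the decision, swamped by heavy downstream noise.
    return decision_quality + rng.gauss(0, 10)

def estimate(decision_quality, n_runs, seed=0):
    rng = random.Random(seed)
    outcomes = [simulate_outcome(decision_quality, rng) for _ in range(n_runs)]
    mean = sum(outcomes) / n_runs
    # Sample standard error shrinks like 1/sqrt(n): more rollouts,
    # tighter estimate of the decision's effect.
    var = sum((o - mean) ** 2 for o in outcomes) / (n_runs - 1)
    stderr = (var / n_runs) ** 0.5
    return mean, stderr

for n in (100, 10_000, 1_000_000):
    mean, stderr = estimate(decision_quality=1.0, n_runs=n, seed=42)
    print(f"n={n:>9,}  estimate={mean:+.3f}  stderr={stderr:.3f}")
```

At n=100 the noise swamps the effect; the estimate only becomes dependable as the run count grows by orders of magnitude, which is the "thousands to millions of simulations" constraint in practice.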

01;06;51;12 - 01;07;11;28
Chris
You're simulating a ton of different decisions, you're saying? Yeah, because, I mean, the problem with bad ideas is it's really hard to prove a bad idea is a bad idea until it's been proven a bad idea. So bad ideas can hang around forever. Eventually, everyone gets it through their head that this was a bad idea, but you can't disprove them.

01;07;12;01 - 01;07;17;18
Chris
You know what I mean? And that just I'd love to be able to get bad ideas out of the way. Straight up.

01;07;17;21 - 01;07;29;14
Jeremy
Yeah, I think there are interesting sci-fi questions here, like, you know, psychohistory in Foundation. But I do think the statistical problems here are substantial. Substantial.

01;07;29;14 - 01;07;41;05
Aaron
Wait, wait, can we pause on that, Jeremy? Do you think, so I think the endpoint is psychohistory, do you think that's a bad idea, or do you think that's ultimately where we go? Because that's been an animating concept for me.

01;07;41;07 - 01;07;48;08
Jeremy
Yeah, I guess I would love for it to exist. It just seems like a much harder statistical problem.

01;07;48;10 - 01;07;49;11
Aaron
I think that's the hardest.

01;07;49;11 - 01;08;24;04
Jeremy
Problem I want you to solve right now. And yeah, it makes a number of assumptions. So in psychohistory, there's this complex-systems concept of convergence. I would love for it to be true, but I don't think it's true. And when I say convergence, in Foundation it's not possible to predict arbitrary events. Asimov sets out this world where actually you can only predict that there will be, at a certain point in time, a certain event, and that no intermediate events are going to change that particular event.

01;08;24;06 - 01;08;52;13
Jeremy
And so the predictions are quite specific and staged out over time, and they make an interesting assumption about the complex system that we live in: that a lot of outcomes can be foreseen from a distance, because of abstractions you can use to make predictions about how an equilibrium will form. Right? I think it's close to this idea of, well, eventually everything leads to this fulcrum, like a central point.

01;08;52;14 - 01;08;54;07
Jeremy
And what happens at that point is incredibly important.

01;08;54;12 - 01;09;10;07
Aaron
But let's say, let's say you can have, like, a near-infinite number of simultaneous simulations. Don't you think you could eventually, like, approximate it? Like, let's say it's a potentially infinite number of simulations happening, you know, in real time, pretty much.

01;09;10;09 - 01;09;27;11
Jeremy
Yeah. Just, the simulation's fidelity to reality is the most important thing. Like, what likely happens if you create infinite simulations is that you end up treating the biases of the simulation as the likely outcome of interest. And so you need to make sure that that's not what's happening. Yeah.

01;09;27;17 - 01;09;29;02
Aaron
I mean that's a good question.

01;09;29;04 - 01;09;50;16
Jeremy
So there's two forms of uncertainty, right? Bias and variance. Variance is something you can reduce by increasing the number of simulations, until you sort of average out the random variations in decisions. That's what the infinity helps with. But even if you reduce variance to zero, if your simulation is not the same as reality, that is, it has any bias,

01;09;50;18 - 01;10;01;22
Jeremy
the error will be exactly whatever that bias happens to be. And creating simulations that have total fidelity to reality is really quite challenging.
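
[Editor's note: Jeremy's bias/variance decomposition can be made concrete with a minimal sketch, all numbers invented: more runs drive the variance term toward zero, but the average converges to the simulator's biased value, not to reality.]

```python
import random

# Bias vs. variance in simulation: averaging more runs kills variance,
# but a miscalibrated simulator converges to its own bias, not to reality.

TRUE_VALUE = 5.0   # what reality would give (assumed for illustration)
BIAS = 2.0         # systematic error of the simulator (assumed)

def biased_simulation(rng):
    # Each run returns bias plus random noise on top of the true value.
    return TRUE_VALUE + BIAS + rng.gauss(0, 3)

def average_of_runs(n_runs, seed=0):
    rng = random.Random(seed)
    return sum(biased_simulation(rng) for _ in range(n_runs)) / n_runs

for n in (10, 1_000, 100_000):
    est = average_of_runs(n, seed=7)
    print(f"n={n:>7,}  estimate={est:.3f}  error vs reality={est - TRUE_VALUE:+.3f}")
# As n grows, the estimate settles near TRUE_VALUE + BIAS: the residual
# error approaches the bias, no matter how many runs you add.
```

No amount of extra sampling removes the offset; only improving the simulator's fidelity, i.e. shrinking the bias term itself, does.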

01;10;01;24 - 01;10;24;23
Aaron
I think that's absolutely true, Jeremy. Super fascinating points. I know you've got to bounce, so thanks so much for your time, and really appreciate the thought-provoking conversation, the context, and kind of leading us down this journey. Super excited about what you're building, huge fans, and, you know, we're rooting for you as you begin to solve many of these complicated problems.

01;10;24;23 - 01;10;26;22
Aaron
So thanks so much for your time. Yeah.

01;10;26;22 - 01;10;27;24
Chris
Thank you Jeremy.

01;10;27;27 - 01;10;29;16
Jeremy
Lots of love. And till next time.

01;10;29;22 - 01;10;48;25
Aaron
And for those that hung on, welcome to Net Society. You have Chris, Pri, and me, and special guest Jeremy Nixon, talking about all things internet, AI, technology, culture. I mean, Chris, we didn't even get into the most important stuff this week, which would be the US hockey team, right? So I guess we may have to leave it,

01;10;48;26 - 01;10;49;21
Aaron
Leave it there.

01;10;49;23 - 01;10;55;11
Chris
Well, look, we're marked safe from a deterministic future, so we don't know who's going to win gold.

01;10;55;13 - 01;11;08;09
Aaron
It's true, it's true. Although I think if we give Jeremy a couple more years, it feels like he's going to solve that for us, or at least part of it. We'll see. Maybe we'll see accurate sports projections. But I hope you guys enjoyed this, and I hope you guys have a great week.