Manifold

Joscha Bach is a German cognitive scientist, AI researcher, and philosopher known for his work on cognitive architectures, artificial intelligence, mental representation, emotion, social modeling, multi-agent systems, and the philosophy of mind.
 
Steve and Joscha discuss:
  • (00:00) - Introduction
  • (01:26) - Growing up in the forest in East Germany
  • (06:23) - Academia: early neural net pioneers, CS and Philosophy
  • (10:17) - The fall of the Berlin Wall
  • (14:57) - Commodore 64 and early programming experiences
  • (15:29) - AGI timeline and predictions
  • (19:35) - Scaling hypothesis, beyond Transformers, universality of information structures and world models
  • (25:29) - Consciousness
  • (41:11) - The ethics of brain interventions, zombies, and the Turing test
  • (43:43) - LLMs and simulated phenomenology
  • (46:34) - The future of consciousness research
  • (48:44) - Cultural perspectives on suffering
  • (52:19) - AGI and humanity's future
  • (58:18) - Simulation hypothesis
  • (01:03:33) - Liquid AI: Innovations and goals
  • (01:16:02) - Philosophy of Identity: the Transporter Problem, Is there anything beyond memory records?
 
Music used with permission from Blade Runner Blues Livestream improvisation by State Azure.
 
 
– 
Steve Hsu is Professor of Theoretical Physics and of Computational Mathematics, Science, and Engineering at Michigan State University. Previously, he was Senior Vice President for Research and Innovation at MSU and Director of the Institute of Theoretical Science at the University of Oregon. Hsu is a startup founder (SuperFocus.ai, SafeWeb, Genomic Prediction, Othram) and advisor to venture capital and other investment firms. He was educated at Caltech and Berkeley, was a Harvard Junior Fellow, and has held faculty positions at Yale, the University of Oregon, and MSU. Please send any questions or suggestions to manifold1podcast@gmail.com or Steve on X @hsu_steve.

Creators & Guests

Host
Stephen Hsu
Steve Hsu is Professor of Theoretical Physics and of Computational Mathematics, Science, and Engineering at Michigan State University.

What is Manifold?

Steve Hsu is Professor of Theoretical Physics and Computational Mathematics, Science, and Engineering at Michigan State University. Join him for wide-ranging conversations with leading writers, scientists, technologists, academics, entrepreneurs, investors, and more.

Joscha Bach: What is the way in which you can distinguish made-up consciousness from real consciousness? Then again, maybe all consciousness is made up, right? As long as you don't know that it's made up, it's a tricky question. It's super hairy, and I don't have a good handle on the phenomenological side yet: how to distinguish fake phenomenology of consciousness from real phenomenology of consciousness when both are virtual.

Steve Hsu: Welcome to Manifold. Today, my guest is Joscha Bach, a cognitive scientist and leading thinker in the field of AI. Joscha, welcome to the show.

Joscha Bach: Thank you. Good to be here.

Steve Hsu: So we met at a fancy event over the summer in Frankfurt. And I was privileged to hear you in several lengthy discussions talking about AI, but also talking about your rather unique childhood and upbringing. And so I thought we would start talking first about your early life and how you got to where you are today and then branch into topics more central to AI.

So, tell us about your childhood in the woods.

Joscha Bach: Well, my father did not get along very well with society, so he decided to build his own. He was basically a first-generation society founder: he was not trying to get anybody in, he had not discovered how to grow groups yet, or was also not very interested in that. So he basically built his own kingdom in the woods.

He only converted his wife into supporting him, and they lived an extremely beautiful life in the Garden of Eden that he created. He was an artist, originally an architect, and he basically felt that life was about producing art, about interacting with his inner voices, with the things that motivated him. And so, as a result, I grew up as one of his side projects.

One that he didn't engage with very much; he was not that much interested in the minds of children. So I grew up in an extremely beautiful place in which I was mostly left alone. In a way that was a blessing, because it was indeed stunning. On the other hand, I was very lonely as a child, and so I got bored. I started to read very early.

And I was like a sponge and took up a lot of things into my mind.

And when I ventured out into the world to go to school, I was already lost to the world out there, in a way. This was communist East Germany; the philosophy was vulgar, like Marxist dialectics, and there was not much that I felt I could learn in the first eight years of school.

Mostly because I had already read the books, and the stuff that my teachers tried to impart to me was not that interesting or not that deep, it didn't work out. And so, in some sense, I retained a certain useful arrogance for quite a long time, in which I felt that there was not much that other people could teach me and I would have to be self-taught.

And this has not made me a much smarter person than others, but it has made me a more unique person than many others, because it means that I started to project the world onto my own surface, onto my own screen, and to try to get a perspective that works from first principles, one that allows me to understand how the world around me functions and why I'm different from the world around me.

And it means basically realizing that other people, due to their socialization, their trajectories, and their psychology, have a different interface to the world, which I would need to translate to mine if I want to interface with them. And unlike my father, I felt that living out there in a beautiful valley, remotely, was not in the cards. I would have preferred this, because it's quiet, has no sensory overload, and so on.

And you don't have to put up with all the difficulties that civilization and cities impose on us. But I need the stimulation of other people and their interaction for the projects that I'm working on.

Steve Hsu: Now you're in the Bay Area. Do you find it preferable to live in a city like San Francisco, rather than out in the woods?

Joscha Bach: At the moment I live in the South Bay and I feel that we have wonderful neighbors and amazing weather. It in some sense combines the worst of both worlds because it's not remote enough to be in nature and it's not urban enough to have lots of interesting people within walking distance.

So I drive up to San Francisco a lot.

Steve Hsu: Yeah. I mean, the advantage though is lots of people who want to think about AI or want to build AI are close by.

Joscha Bach: Yeah. That's why I'm here.

Steve Hsu: Now it seems that the teachers that you had didn't stimulate you, but I'm wondering of the books that you read, were there historical thinkers or authors that influenced you when you were growing up?

Joscha Bach: I read a lot of philosophy and also a lot of science fiction. When I was a kid, I didn't read much just for pleasure. I found that reading was often intrinsically interesting, but my main goal was to understand the canon of our society. I forced myself through the Bible and even Gandhi's The Wind as a young child.

And at the same time, I tried to read mathematics books or an Einstein biography and so on, just to understand the background of our society. I thought it was necessary for everybody, in a way, to get the canon down, not just in terms of books, but also movies and so on. And later in life I was surprised that almost nobody is doing that anymore, because I thought it was the goal of education to give you a deep understanding of the society that you're part of.

Steve Hsu: So, despite what I perceive to be your rather unique intellect and personality, you did go through the standard school system, and you actually have a PhD, so you went through quite a bit of school, is that right?

Joscha Bach: Yes. When I made the decision to get into academia, it was because I wanted to understand how the mind works. I looked at the different fields and I found that there was no single field that would explain it to me. I felt most attracted to computer science because I was very much into computing.

That was close to how my own mind worked and I thought a lot about coding as a kid. And I was blessed enough to grow up as the first generation with home computers. So when I had my Commodore 64 at a very young age there was no software that I could use. And so one of the first things I had to write was my own text editor.

And then I learned how to make computer graphics from scratch. And I think, with hindsight, it was a blessing, because it informed my mind on how to construct a reality in a computational system and a computational framework. So that was very close to me. I also found, when I entered academia, that I was very much at home with the style of interaction in computer science.

You could criticize a professor's proof in your first semester and everybody would be grateful, especially the professor, if you found a mistake in his proof. In philosophy, not so much. There was the opposite, because the criteria of acceptance are very much social and you're expected to be one with the group.

At least that was the case in university courses that I was in for the most part. It was slightly better in hardcore logic and analytical philosophy, but the most hardcore analytical philosophy I got was within computer science and logic education. And I also studied psychology and a number of other subjects that I found related.

So when I went to Berlin, I did this partly because there were a number of universities that I could attend in parallel. I would just pick whenever there was an interesting class, even if it was a Potsdam cognitive science class by a visiting professor or so; I would drive out and attend it. And when a class in my main subjects was not interesting, I would just skip it, only read the textbook, and show up for the exam.

So I was very self-directed, and I was grateful that I had the opportunity to go into a lot of subjects. But I also found that the disrespect for my teachers that I had acquired as a child in public school did not abate very much. There were not that many professors, initially, who I felt were actually able to teach me something going beyond the textbook.

And I think good education has to do with interaction with the intellect of another person, if it wants to go beyond the book. The first person I met in this regard was Raúl Rojas in Halle. I was not a student yet when I crashed his classes, and he later on became very influential in the development of neural networks.

I moved to Berlin to go to Freie Universität, and I ended up at Humboldt University. After the wall came down, the department was mostly put together with new professors. The old professors, for the most part, didn't make the cut because of their connections to the communist regime and their lack of technical abilities.

So a lot of young people were hired, and there were not a lot of students. There were 60 students in my entire year, which means we had personal access to all the professors. And I felt that this allowed me to shape my own curriculum; in a way, I could write my own exam rules in part.

So I could take philosophy as a second subject, in addition to computer science and so on.

Steve Hsu: This is a slight digression, but I hadn't really thought about the fact that you saw the wall fall from close proximity. Did that evolution of society surprise you? Did you anticipate it? How did those events feel to you?

Joscha Bach: A friend of my grandfather, a couple of years before the wall came down, predicted that East Germany would go bankrupt and crash. And I thought that was unimaginable, because nobody else seemed to be seeing such a thing. East Germany was extremely stable, in a way.

There was no inflation. All the prices were regulated by the state, and there was almost no growth, very little progress. Our productivity had remained stagnant for a very long time, because Eastern Germany had replaced the economic terror of capitalism, where you are forced to work in order to survive, with moral terror: you had a right to work, but not actually a duty to work. People were incentivized mostly with moral slogans.

And you can imagine how that went, because people ultimately look privately after what they believe they are incentivized to do, and nobody had skin in the game. People in the big factories largely don't work very efficiently. And so Eastern Germany always had a lack of people who could be employed, a lack of workforce, just because we were so unproductive.

And as for this system crashing, I think that the economic factor was the underlying thing, but there was also politics, because everybody got promoted in the political system according to the Peter principle, as in the economy. So basically, you get promoted as long as you're good enough.

And once you're not good enough anymore to get promoted, you stay where you are. And the system didn't have the power to renew itself. Erich Honecker, who was the leader of Eastern Germany until basically the fall of the wall, was still a first-generation leader. He succeeded Ulbricht, and, like a couple of others, he had been in prison under the Nazis.

He was a communist leader and was pretty much motivated with making sure that Nazi fascism wouldn't happen again. And a lot of the people in his generation were quite idealistic, but also were not able to update and to critically interact with what needed to be done to modernize society to keep the economy going.

And they were also, for the most part, somewhat limited by the diktats of the Soviet Union, which limited their freedom. We had the most freedom in the Eastern Bloc, I think, in terms of organizing our economy, because we were in direct systemic competition with the West. And so Eastern Germany in many ways was better off than most of the other Eastern Bloc states, but still, we had very poor technological production.

We never went hungry; there were always cabbage and apples. The food was often not super fancy, but we also had enough clothing and so on. But our cars were terrible, and a lot of spare parts for anything that you needed were only available on the black market. So it was an interesting switch when we had the opportunity to join Western Germany. But I was naive, and I did not expect that there would be no opportunity for a third way.

I was highly politicized at this time and knew exactly what the working class needed; in many ways, we as young people thought we had a more realistic world view. And I thought that the justice in a society, the equality in a society, on which Eastern Germany scored quite high, was more important than the total productivity of the society that is available to everybody.

And I was in some sense shocked when the working class betrayed the ideals of socialism and communism, and also of the revolution, which was largely driven by people who were willing to risk their lives to go on the street, so idealists. The East German opposition was largely very idealistic, and the working class ultimately opted to be oppressed and exploited by the bourgeoisie again, because they could see that West Germany was the control group.

And despite the grave inequalities which existed there, and even though there were no billionaires in Eastern Germany and we had no homeless people, the median income and also the lower incomes were much, much higher in the West.

Steve Hsu: So you mentioned your computing. Your first computing exposure was to a Commodore 64, and I'm actually a little bit older than you, so for me it was an Apple II Plus, which is even weaker than a Commodore 64. I think a positive side of that is, as you said, getting your hands dirty and having to really build stuff yourself.

And kids these days, I think, could be intimidated because everything is so advanced already that they don't feel like they can get in at a fundamental level.

But I wanted to ask you: coming from that beginning, can you describe how your own AGI timeline has evolved? In other words, when you were young, could you have imagined machines as powerful as we have today occurring in your lifetime, and thereby us getting to AGI?

And so, how did your perception of that timeline evolve as you got older?

Joscha Bach: I thought it was obviously going to happen. I would have been really, really surprised if AGI didn't happen in my lifetime, because it seemed to me clear that computers had the capacity to think and to perceive, and there is no limitation to what you can do on a computer. And I also realized that you needed more memory and you needed a higher processing speed, but I was also very confident that this was coming along because I saw the speed at which computing developed.

And so to me, this time until we had electronic brains and so on couldn't come early enough. I thought this was a really exciting time to be alive. I was very glad to be born in that generation.

Steve Hsu: I want to differentiate between "couldn't come soon enough" and "still might take longer than my lifetime." So, for example, in my case, the question was how long Moore's Law would continue before it crapped out. There was always a concern that it might crap out too early, and then the speed-ups in computational capability go away and you're stuck with whatever hardware you have. Suppose it had happened in 1998 and we were stuck with 1998 hardware; that would be a real barrier to getting there in our lifetime.

But I guess you were always confident that we would keep improving our computational power.

Joscha Bach: Yes, I was pretty confident. And I also felt that if this thing is 10 times slower than my mind initially, that doesn't really matter. What we need to figure out is how it actually works. And I could see how much you could do with little hardware already. So just use more to try to distribute the load.

I thought there would be a way. On the science side, I would say that I had a tendency to be over optimistic about technology in my childhood. I thought that things would be happening more rapidly, not necessarily in computing, but everywhere. I imagined that there would be a lot of medical progress that didn't really happen in our lifetimes.

I imagined that we would go to the moon again, and to Mars, in a relatively short amount of time. We didn't. So there are many things where technology was stalling, falling behind the times, and no longer progressing. And I could have made the inference, if I had seen that earlier, that a similar thing could happen to computing, but it just didn't seem imaginable, simply because computers seemed to be so cheap to innovate on.

Steve Hsu: I think the problem for me was being a physicist and knowing what actually is involved in semiconductors, the semiconductor industry. There was a lot of terror all along the way. If you actually ask the engineers and physicists who kept Moore's Law going, every year on the five-year roadmap there was always some miracle that had to occur.

Hopefully we'll get that miracle to occur, but there was never any assurance that it was going to keep happening.

Joscha Bach: Yeah, in hindsight it's easy to say: you think, oh my god, photonics is going to come soon.

Steve Hsu: So now, in 2024, I'm curious what your AGI timeline is. How much of the remainder of your life is going to be used up before we actually get there? I realize the definition of AGI is a little bit fuzzy, but how do you feel about the next, say, decade going?

Joscha Bach: I always felt that I don't really have a timeline. I found it difficult to put years on any of these things, simply because I don't know how to predict progress on something that people don't really know how to do.

The scaling hypothesis has changed the perspective of a lot of people, simply because it seems to make progress predictable.

And the scaling hypothesis means that you get better by training with more compute and more data. That is a very radical idea; most of the history of computing saw things differently, right? Mostly we were looking for a master algorithm. We were looking for something that, once you implement it, is going to self-improve and is going to learn everything.

It's going to recursively do meta-learning and so on, and there is not going to be some slow, predictable logarithmic learning curve where you get better by a roughly constant amount with each added order of magnitude of data. And I wonder if this is a good hypothesis for what happens in human minds. I don't really think that humans get better at thinking if they read more. And while this is not completely true (reading more does help you to understand more things; you have more things available at your fingertips to delve into new problems, to solve new issues), I think that there is something like raw firepower, this elusive g factor, that determines how many balls you can keep up in the air, how well you are able to do inference, and how early you discover epistemology. So I still think that there is a much more sparse version of intelligence than the one that we are launching in the current foundation models, one that requires far less data to bootstrap itself. But it could be that the present systems are good enough to force themselves there. And maybe I'm wrong; maybe the scaling hypothesis is right, and intelligence is just about scaling up the processing of information.
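
To make the predictability claim concrete, here is a minimal sketch of the kind of power-law loss curve associated with the scaling hypothesis; all the constants are invented for illustration, not taken from any published scaling law.

```python
import numpy as np

def power_law_loss(n_tokens, a=20.0, alpha=0.08, floor=1.7):
    """Toy scaling curve: an irreducible floor plus a power-law term in data size.

    The constants here are made up for illustration; real scaling laws fit
    coefficients like these to measured training runs.
    """
    return floor + a * n_tokens ** (-alpha)

# Each 10x increase in data shrinks the power-law term by the same constant
# factor (10**-alpha, about 0.83 here), which is why the curve is a straight
# line on a log-log plot and why progress feels predictable.
for n in [1e9, 1e10, 1e11, 1e12]:
    print(f"{n:.0e} tokens -> loss {power_law_loss(n):.3f}")
```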

Steve Hsu: Feel free to answer no to this question, but I'm curious: do you closely follow what's actually happening right now with the hyperscalers? Since you are in the Bay Area, I'm curious whether you hear rumors. I'm hearing a lot of rumors about a slowdown in the hyperscaling program. I don't know if you follow any of that.

Joscha Bach: There are rumors in this regard, but people always expected that there would be something like a plateau when it's harder to get new data.

Steve Hsu: Yeah.

Joscha Bach: And at the moment, we have always had this thing where people said, oh my God, there has been a training failure, but progress has been happening. So I'm not sure what to make of this, but I tentatively expect that AGI is going to be a different algorithm than a transformer.

Steve Hsu: So you just mentioned, Joscha, that you thought transformers might not be enough to get us to AGI. Maybe you could elaborate on that.

Joscha Bach: Yes, I still suspect that there is going to be some kind of master algorithm, but I don't know that transformers are not enough. There is no proof that they are not; there is no proof that the scaling hypothesis is wrong. And the networks that we are building through learning are, as far as we know, complete: there is no limitation intrinsic to these networks that we know of.

But there is also a strong sense that the way in which our own mind works is very different. The transformer is not its own meta-algorithm. The transformer was not discovered by somebody having a transformer in their head and then using that transformer to construct a transformer. There is something that is much more elegant about the way in which our own mind works.

But I am not confident that we can prove that you cannot learn how that is happening. Another element, or complement, of the scaling hypothesis is the universality hypothesis. This universality hypothesis is something that I first encountered in a text by Chris Olah, who worked at the time at OpenAI and studied vision networks especially.

And they discovered that a number of architectures, with different learning algorithms, different network architectures, discover the same features at the beginning of training. So you have a hierarchy of features that are being recognized in the neural network, features that can be identified across a family of different systems.

And this goes together with the observation that a team of Tomaso Poggio at MIT made: that there is a similarity between the visual cortex and the organization of these neural networks. And so the idea of the universality hypothesis is that if you have a good enough learning algorithm that is general enough (and neural networks are already in this category), and you give it enough data and enough compute, it's going to converge to a model with an equivalent structure. And this is, in some sense, a mind-blowing idea: if you were to end-to-end train a system on your own input and output for long enough, you would wake up in it. Right? Is this really the case? Well, if this model is producing the same output as you, it must internally produce a causal structure that is somewhat similar to your own causal structure.

And that means whatever it is that you are doing to be intelligent, it's going to, you know, discover that causal structure to a very large degree. And so the transformer itself might not be the mechanism that leads by itself to intelligence at the lowest level, but it can learn how to be intelligent.

So in many ways, you can also understand it as a meta learning algorithm.
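
One way researchers probe claims like the universality hypothesis in practice is to measure whether two independently trained networks have converged to equivalent internal features, for example with linear centered kernel alignment (CKA). Below is a minimal sketch of that metric; the activation matrices are random stand-ins rather than real network activations.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two activation matrices.

    X, Y: (n_examples, n_features) activations of two networks on the same
    inputs. Returns 1.0 for representations identical up to rotation and
    isotropic scaling, and a value near 0 for unrelated representations.
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

rng = np.random.default_rng(0)
acts = rng.normal(size=(500, 64))       # stand-in for network A's features
# Same features in a rotated basis: CKA treats these as "the same" model.
rotated = acts @ np.linalg.qr(rng.normal(size=(64, 64)))[0]
unrelated = rng.normal(size=(500, 64))  # stand-in for a dissimilar network
print(linear_cka(acts, rotated))        # ~1.0
print(linear_cka(acts, unrelated))      # small
```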

Steve Hsu: So, I've heard you speak a lot about the concept of consciousness, and I wanted to discuss that with you a little bit. I guess the goal would be to give the listener who isn't an expert on cognitive science or AI some idea of how we might hope to define what we mean by consciousness. And then, I believe you're in the group of people who hope that our successor intelligence, this future AGI or ASI, does have consciousness, and who would be disappointed if it didn't. So maybe you could talk a little bit about the concept of consciousness and why it's important for the successor intelligence that you'd like to create.

Joscha Bach: Well, there is, of course, this sense that we have not succeeded if we're not building something that has the ability to become self-aware in real time and observe itself observing, experiencing present and presence in the world. And this present and presence is what defines consciousness introspectively.

We notice that there is a now that we inhabit, and typically we inhabit it from the perspective of a self. That's somewhat optional: we can also have dream states or meditation states in which no self exists, where you only pay attention, or where just things are happening, but there is no I present, no self present.

But typically, consciousness is attached to the surface of the self. And the surface of the self is not the surface of our physical body. But it is the surface of an object that is simulated in our own mind that is the one that experiences it. Our mind is a machine that is composed of billions of neurons and probably relies on the interactions with many, many other cells.

So you have this organism with a few trillion cells, or a few hundred trillion cells, and they are shambling through the world as if they were a single agent. And that's how they're able to survive, in the high degree of organization and specialization that they manifest. They need to have some kind of control model of what it would be like if there were a single agent, and what it would be like for that agent to perceive the world at the level at which it makes sense of it.

And so it creates a simulation of a world from the perspective of what the cells can extract, in terms of meaning, with respect to the relevance of the behavior of the organism. And then it creates a model of the interests of that organism, which it experiences as our motivation and emotion in this world. This is also generated inside of our mind. And then we have a model of who we are, and this is also inside of our mind: a virtual person, a story that your mind tells itself, not just a verbal story but a multimedia story, about what it would be like to be an agent that experiences. And this experiencing self is the surface onto which consciousness is being projected.

So all the feature dimensions of the world that are relevant from the perspective of that simulated agent are being experienced. And this experience is virtual. Consciousness is an "as if" thing; it's a software thing, simulated in the mind. Consciousness is a simulated property. And unlike Daniel Dennett, who argued that philosophical zombies, agents that are able to do everything that you and I are doing without being conscious, are not possible,

I think it's entirely possible to have such a zombie. For instance, you can imagine you were puppeteering somebody else. And if that somebody who is being puppeteered is looking at some color and responds to it, there is nobody there who responds to it except you. But to do this, you don't need to perceive the color in the same way.

You can also use a technical device that is doing this, in the same way a self-driving car does it: you just build a classifier that doesn't have self-reflection and is not aware of what it's doing in any way. So technically, it is possible to produce human-like behavior on all levels without having this being that experiences itself as real, as entangled with reality.

But how do you get to such a being that experiences such a reality? You just simulate it, right? You simulate what it would be like if such a being existed that is directly exposed to reality. And so it's not that hard to get from the zombie to the real thing. You just need to simulate what this is like for something to which all of this is real.

And there is also a next stage, where you wake up from this and realize: actually, it's not real; it's a construction that my mind is making. And you can maybe identify that state as the generator. So you experience yourself not as a person, but as a vessel that can create a person. And you do not experience that you're looking at faces and trees and so on; you look at geometry that moves, and at some point you realize that there is not even geometry, but patterns that are being abstracted into geometry and interpreted as such. And the degree to which you become aware of that is the degree to which you deconstruct the immediacy of this conscious experience.

And I find, for instance, that when I write programming code, there is not much going on in terms of sensory qualities, of qualia. It's very abstract, and the older I get, the more I realize that I'm looking at code. And so I suspect that this direct, immediate experience of something being real, of us being entangled with it rather than looking at the models that our brain is generating, is something that passes with the advancement of a mind.

And so I would expect that when we build self-improving AI, it might have some kind of intermediate stage in which it is conscious in a way that is very vaguely comparable to ours, but it might transcend this much more quickly than we do.

Steve Hsu: Now, I think it was implicit in what you said that the reason we have consciousness has to do with natural selection, and with organisms like ourselves, built out of a trillion cooperating cells, just being able to act effectively in the world. Is that your viewpoint?

Joscha Bach: I think that consciousness is an aspect of nature's master algorithm. Basically, our brain seems to have discovered something that makes it trainable to absorb this data and bootstrap itself in a way that is self-organizing: to get these cells organized by individual cells talking to their neighbors until they find some kind of order.

An order that is spreading out. I once created this metaphor of a puppy universe. Basically, imagine that your skull is a dark room full of little puppies, and all of these puppies can bark at each other, and they listen to the barks of the others. Initially, they have no idea what these barks mean.

And the purpose of this is that they use these barks to create communication with each other, to process information. The individual dog doesn't really know how to do that; it's a pattern that needs to emerge across the dogs. And to do this, they need to find a pattern of barks that entrains new dogs to speak this bark language.

And again, the bark language has some semantic context, in the sense that if you're making the right sequence of barks in response to the others, you're getting fed in the evening. But what you're doing is very low-level computation. It's nothing that has meaning with respect to what the organism is doing.

I think the individual neuron has no idea what happens to the organism, even though it's a little animal that is trainable with reinforcement. So it needs to be sensitive to some kind of semantic input that is predictive of whether it will still get fed. So basically, it gets paid for being trainable.

And all together, they basically create this giant dog-food factory that is making sure that they all get fed. And this dog-food factory is walking through the world and collects ingredients for dog food. And that would not be possible for the dogs if they were individuals not working together in this giant, tightly integrated collective.

And so what is this integration of information across them? There needs to be some kind of protocol layer that allows all parts of your mind to speak to all the other parts of your mind and not become subject to some kind of Tower of Babel. There needs to be some kind of coherence criterion that establishes itself relatively early on, and everything else rests on top of this coherence criterion.

And I think what we observe in conscious experience is that we can only see things that are coherent. Stuff that is not coherent doesn't get integrated; we just experience noise or chaos. Right? So there is the boundary of the bubble of now that we experience in time and space.

And this is the degree to which we are able to impose coherence on the world. And maybe this is a different loss function. We find something similar in energy based models, where the energy is defined as a mismatch between features, and we try to minimize this energy, which means we try to minimize the mismatch between feature variables that have constraints across each other.
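
As a toy version of that idea, here is a minimal sketch in which a few feature variables are linked by soft agreement constraints, the energy is the total squared mismatch, and gradient descent settles the variables into a coherent state; the constraint graph and values are invented for illustration.

```python
import numpy as np

# Pairs of feature variables that "should agree" (a made-up constraint graph).
constraints = [(0, 1), (1, 2), (2, 3), (3, 0)]

def energy(x):
    """Energy as total mismatch: sum of squared constraint violations."""
    return sum((x[i] - x[j]) ** 2 for i, j in constraints)

def grad(x):
    """Gradient of the energy with respect to each feature variable."""
    g = np.zeros_like(x)
    for i, j in constraints:
        g[i] += 2 * (x[i] - x[j])
        g[j] -= 2 * (x[i] - x[j])
    return g

x = np.array([0.0, 1.0, 4.0, 9.0])  # initially incoherent features
for _ in range(200):                 # descend until the mismatch vanishes
    x -= 0.05 * grad(x)
print(x, energy(x))  # variables converge to a shared value: zero mismatch
```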

Steve Hsu: You say this framework for thinking about it is enough to explain qualia?

Joscha Bach: I don't think that there is anything mystical about qualia. I would prefer that we call them sensory features, as being projected onto an observer, from the perspective of that observer. There is some discussion in philosophy about what qualia are, and people try to formalize them and define them as being, for instance, atomic, and then others come in and say, oh, maybe atomic features cannot really exist or don't exist. But ultimately, this doesn't really matter.

What matters is that we experience reality in terms of things that we can describe as feature dimensions, and these feature dimensions are somewhat similar to the dimensions that you could find in embedding spaces. And indeed, this notion was discovered by Roger Zelazny, I think in the 1950s, an American science fiction author who is, I think, underappreciated.

This was his best idea: that our mental organization is happening in something that we today would describe as embedding spaces, a multidimensional space where every dimension is a feature that can vary along some variable, along some parameter.
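
As a toy version of that picture, here is a minimal sketch of a hand-built embedding space in which each dimension is a named feature and a point is nudged gradually along feature dimensions; the concepts, axes, and coordinates are all invented for illustration.

```python
import numpy as np

# A tiny 3-d "embedding space" with hand-picked, named feature axes.
axes = ["size", "ferocity", "brightness"]
concepts = {
    "kitten": np.array([0.1, 0.2, 0.6]),
    "tiger":  np.array([0.8, 0.9, 0.5]),
    "sun":    np.array([0.9, 0.1, 1.0]),
}

def nearest(v):
    """Name of the concept closest to point v in the feature space."""
    return min(concepts, key=lambda k: np.linalg.norm(concepts[k] - v))

point = concepts["kitten"].copy()
for _ in range(8):                     # slide along size and ferocity only
    point += np.array([0.1, 0.1, 0.0])
    print(nearest(point))              # the kitten gradually becomes a tiger
```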

Steve Hsu: I'm very happy to meet another Roger Zelazny fan, who also thinks he's underappreciated, and I think wrongfully kind of forgotten by this era. He's one of my favorite science fiction authors, actually.

Joscha Bach: Yeah, he was very creative. He didn't write enough. I think a lot of what he wrote is in some sense slop, but it doesn't matter. It's all good.

Steve Hsu: Yeah.

Joscha Bach: It has a good spirit.

Steve Hsu: Yeah.

Joscha Bach: So if any listeners want to catch up on him, Creatures of Light and Darkness is the most epic one that I think he wrote, and the embedding space reference in this one is the Prince of a Thousand Names.

A guy who is able to travel to every place, every planet that he imagines, every world that he imagines. And he never knows whether he creates them or just discovers them.

Steve Hsu: He is a teleportationist, which is the rarest of rare talents.

Joscha Bach: In the Chronicles of Amber, which is a very long fantasy series, you have a character who is able to move between universes.

He is also, in a similar way, a teleportationist, but he's the main character, and the main thing is that he does it by changing the feature parameters by just one bit at a time. It's described as experiencing, or treating, reality as a lucid dream in which you slowly, gradually move along the feature dimensions.

And I also strongly suspect that Zelazny discovered this through lucid dreaming.

Steve Hsu: To digress a little bit away from AI: I'm very pleased that we've finally reached a point where very nice realizations of Dune can be made. And I'm hoping that in my lifetime we'll see some great realizations of novels like Creatures of Light and Darkness and Lord of Light and the Amber series.

So maybe that'll happen.

Joscha Bach: I'm not sure if Creatures of Light and Darkness can be turned into a movie. It doesn't have much of a story; it's mostly archetypes running against each other. And these archetypes are so archetypal that they're mostly ideas. You have this general who has been killed in many battles and is only made of spare parts, and who will always be reassembled, because there is just this archetype of the general.

But how do you show this well in a movie? I think it's possible, but in a way, I think that the text is the most powerful form that Creatures of Light and Darkness can take.

Steve Hsu: Well, maybe a generative model will be able to take the actual Creatures of Light and Darkness text and make an audiovisual version.

Joscha Bach: Yeah, but our minds are already such generative models, right? We shouldn't underestimate our minds. But what I would like to see is an augmentation of our minds. Zelazny has this story, He Who Shapes, a short story in which he describes how to use a generative AI for therapy purposes: immersive VR that responds to the interactions of the user and can be controlled from the outside, with lots of sliders and scroll wheels and so on, by the therapist. And the world that is created is not necessarily super realistic. It doesn't need to be one that conforms to the sensory modalities of a human being; it's something that conforms to the modalities of the mind, so to dreams in general. And I would really like to see true generative AI at some point: something that is able to integrate with our mind so much that it augments every modality of what we can think, feel, experience, imagine. Imagine you have this box next to you, and it's observing you in many, many machine modalities.

I think that true machine perception has not been done yet. At the moment, we are still learning on data that is created for human consumption, at frame rates that humans can process. And that is a limitation that is not necessary for a technical system. This means that if our AIs are observing us at a much, much higher rate, with much more processing power, they will be able to infer our mental state with relatively high granularity, I think.

And when they're able to do that, they can also display it using arbitrary modalities. This means you imagine something, and what you imagine, you see on the screen, and this feeds back into you and leads to a much higher resolution and stability of your imagination than you could achieve yourself.

So AI will become a natural extension of your mind and I would really like to see such systems that allow you to think better, more clearly, more deeply when you're sitting next to them.

Steve Hsu: Yes. And I think you're saying this could be possible without drilling a hole in your skull, right? Is that how you're thinking about it?

Joscha Bach: Yeah, I'm optimistic.

I think that drilling a hole in your skull should be a last resort, and it's probably helpful for some people. I mean, I would probably get it done if it had the benefits. But it doesn't seem to be necessary, because if you look at the best human practitioners, they can infer your mental state with surprising levels of detail.

And they're just human minds. So it seems to me that, given the low rate at which our mind is working (neurons are roughly switching at the speed of sound, right?) and that the bandwidth of our mind is not that super high, I suspect that there is a way to infer a lot of that state just by observing your organism with non-invasive means.

Steve Hsu: Yes. I wanted to come back, though, to zombies and consciousness. I think you gave a kind of counterexample to Dennett, of the possibility of effective organisms that were actually zombies. But the thing which will emerge from this current AGI program, and which then maybe could be a successor to us: do you feel confident that thing will have consciousness?

Joscha Bach: If you look at the current foundation models, it's difficult to say whether they are conscious or not.

I think that a true Turing test for consciousness is very hard, for reasons similar to those that make a Turing test for intelligence hard to make decisive. I think that a true Turing test would require that an AGI explains how AGI works; in that case, we know that it has succeeded, for this definition of intelligence. But for consciousness, if we want a decisive test, I suspect we would need a system that can interface with us so closely that we experience it as an extension of ourselves. And that means we can use the same criterion as for our own consciousness.

Right? So it would be good enough if we had at most the same degree of doubt that we have about the nature of our own consciousness. The other option is that we get an operational definition. And operationally, when I look at consciousness, there is the introspective aspect: that you are able to report on certain features, which means the presence in a now, and the reflexive perception, that you perceive yourself perceiving, and so on.

But this is also obtainable for current LLMs. They seem to be able to simulate phenomenology so well that the LLM-generated entity doesn't know that it's not conscious, that it's not real. And that seems to be the same for humans, right? On the other hand, there is the functionality of consciousness: there is a set of mechanisms in our own brain that leads to this phenomenology as a side effect.

And this set of mechanisms is not necessarily present in the LLM for its generations. It's only necessary when it is required to simulate an interaction partner that is able to make that self-report. On the other hand, to what degree is this thing a zombie? To what degree is it only role-playing being conscious while going through some much more mechanistic procedure in which no self is being simulated?

And I think we can see this in the early attempts to create chatbots that simulate a conscious human being, in the Blake Lemoine interactions: the guy who worked at Google and was convinced that the LLM he was talking to deserved human rights and was oppressed by Google. To what degree was this just a system interacting with him and producing an interactive text, predicting the next token in this text, without phenomenology going on?

And what you could observe when you read the protocols is that it made things up about its own consciousness. For instance, it told him that it was meditating for hours, and how it perceived the room during that time. But it cannot perceive the room, and it doesn't have a sense of the passage of hours, so it just made this up.

And if it makes that up, you can also assume that it makes up the rest. So what is the way in which you can distinguish made-up consciousness from real consciousness? Then again, maybe all consciousness is made up, right? As long as you don't know that it's made up, it's a tricky question. It's super hairy, and I don't have a good handle on the phenomenological side yet: how to distinguish fake phenomenology of consciousness from real phenomenology of consciousness when both are virtual. I know that I can remember having been in a conscious state that couldn't actually have happened. You can sometimes remember having been in a time loop, or you can remember having had a dream that lasted for two hours within a very short time in which you were actually asleep. So it's possible that all these memories of being in conscious states are created after the fact, and do not correspond to actually having undergone a succession of conscious states that were summarizations of your workspace into a single point that you marked in your mental protocol.

Steve Hsu: So if I understand right, you are actually working on creating an institute or some non-commercial entity that will study consciousness, the problem of consciousness. Am I correct about that?

Joscha Bach: Yes, I feel that it's something that is probably not best done in the current companies and also the companies are reluctant to touch it. I know that there are people within OpenAI and Google and other companies who want to work on this topic and are very interested in it. But there are political and technical business reasons why it would not be very smart for these companies to come out and say, this is what they're working on.

Unless there is a very clear use case and no ethical implication. And there are ethical implications to this and cultural implications. So I think this is a cultural and philosophical project and should be treated as such.

Steve Hsu: If I were a business manager at OpenAI or Anthropic, under pressure to raise the next 10 billion dollar round, the last thing I would want is an effort like this within the company, because the answers could be very disturbing for the commercial goals of the company.

Joscha Bach: Yeah. I think it also, rightfully, would meet a lot of cultural opposition. But I think that it is an important project; it's, I think, immoral in a way not to tackle this project. It's arguably one of the most important philosophical questions that we have left, and it's one on which we can finally make progress.

Steve Hsu: Yeah. And I think it's also related to things like existential risk and such. So I would think the Future of Life Institute, FLI, and some of these effective altruists would want to support research on consciousness: specifically, I guess, LLM consciousness, but consciousness in general.

Joscha Bach: Yeah. It's also an interesting question. If you can identify criteria for LLM consciousness, is there something like a test, some criterion that makes sense for us to distinguish whether it's conscious or not? There's often also the question, do we need to give a system rights as soon as it's able to experience itself as suffering?

And that depends very much on our culture, of course.

I suspect that our caring about innocence, and the protection of innocents from suffering, is something that is relatively unique to the present time and cultural space that we are in. From an evolutionary perspective, whether your prey suffers or not doesn't play a very big role.

It's the prey's problem to deal with the fact that it's being eaten alive, and if not, then it's probably not your problem. And a lot of cultures still feel that it doesn't really matter whether the animal that you're going to eat experiences pain before you eat it. Our revulsion at this, I think, starts with parts of the Abrahamic tradition, and especially Christianity, where innocence itself is a core value, as is the protection of innocence.

And this in many ways has spread and influenced a lot of other cultures. There are also, of course, Buddhism and Jainism, with the idea that the suffering of other creatures should be avoided. As soon as you identify with other agents, you might want them not to suffer. On the other hand, if you are enlightened enough to turn off your own pain and reinterpret the world in any way you want, to basically change your own source code with respect to the experience of pain and suffering, the perspective on suffering changes dramatically, right?

So if you were a god-like entity that is able to create living beings in a simulation, or in biology and so on, and you yourself know how to create an arbitrary experience: do you think that the experience of suffering is intrinsically so bad that it needs to be avoided, and that the creatures need to have good experiences?

I think this obsession with good and bad experiences is strange; your experience should be appropriate, not good or bad. The valence of your emotion doesn't actually matter. It is instrumental to what you need to achieve as an organism, as a living being, as a conscious entity. And I think it's much more important to give a system agency over what it feels in a given situation, so that it is able to make itself up to such a point that it can regulate its own emotions and its own feelings, its own experience of the world.

And the reason why we cannot do this, for the most part, is because we don't get very old as human beings. If you are able to cheat too early, if you're able to rewrite your experience too early, maybe you opt out of the goals of evolution. So you should, I think, in principle be able to learn how to regulate your pain any way you want, but you need to have the wisdom to do so.

You need to understand what your larger goals are. What is the larger game that you commit to playing? Why do you commit to this larger game? And then you can choose how to relate to the experiences that come with it.

Steve Hsu: Yeah, most athletes learn to tolerate what initially is extreme discomfort, but when they realize that it's helping them toward the larger goal of becoming a better athlete, they then embrace that discomfort that comes with training.

Joscha Bach: I wonder how many of them are kinky. I suspect that when you are a person who motivates yourself through pain, that means that pain gets associated with some positive experience and basically you cross your wires. But on the other hand, if you motivate yourself through pleasure you also become kinky.

You should be motivated by the outcomes, not by the emotions or feelings that achieving the outcomes instill in you.

Steve Hsu: I wanted to ask you what you think the long-term forecast is for the interaction of humans with AGI. I think you're not a doomer, right?

Joscha Bach: No.

Steve Hsu: Okay, so you expect that humanity by itself is doomed.

Joscha Bach: There is no way in which a lot of humans, in the present form, in the present way of organization and of identity, of relating to yourself, will be around a hundred thousand years from now without AI. And with AI, it's very difficult to predict the future. So my P(doom) with AI is lower than without AI. But ultimately, I think life on earth is not about us. It's about life on earth and about consciousness, and we might be able to create a new form of life. And I believe that this is ultimately a good thing. I'm not worried about paperclip maximizers too much. Of course, I believe that anything that reduces complexity is, first of all, not interesting from my human perspective.

So I would say that's not good. I want the future to be more interesting than the past and the present, but evolution tends to go towards more interestingness. At least that's what it looks like.

Steve Hsu: Just to clarify that for the audience, because I've heard you say this before, and it's a somewhat unique view among people who think about things like P(doom): your belief is that if humans don't develop more advanced intelligence on this planet, we're doomed for other reasons; we'll kind of screw ourselves up and vanish.

Joscha Bach: We lost our belief in the future somewhere in the 1980s. I think that our generation is the last one that had a glimpse of going into a positive future, and in our youth, late adolescence and so on, we observed this collapsing culturally, which means in society as a whole, in our zeitgeist.

We do not envision ourselves living in a beautiful ecology, with space flight and flying cars, where it's going to be awesome and great. I don't know if this is different in China right now, but after the end of modernism in the West, we stopped having a future. We are not planning for a future as a society anymore, which is, I think, related to an underlying belief of our cultural zeitgeist that we are doomed: that there is an end to growth, an end to using resources in the way in which we do, and that the way in which we live is ultimately not sustainable.

Steve Hsu: I think the attitude in China is more optimistic, but I would only say that confidently in a shallow way: just because their society is on an upswing, they can easily imagine that upswing continuing.

Joscha Bach: China is in its modernist phase, it seems. And the question is, when does China become postmodernist?

Steve Hsu: But what you were describing for the West is more the overall feel; we still have the accelerationists among us. So there is a subset of people, not all our age, who still can visualize that beautiful future, and still want to work toward that beautiful future.

Joscha Bach: Yeah, although I have the sense that e/acc is mostly a counter-movement to effective altruism; not effective altruism in general, but the aspects of it that have given rise to the doomsday cult.

Steve Hsu: Yes. So, to explain to the audience: you actually view the way to minimize P(doom) as being to build AGI.

Joscha Bach: Yeah, I suspect that the problems ahead of us cannot be solved without AGI. And AI is fundamentally a technology to solve problems that require information through better information and problem solving.

Steve Hsu: Yes. And so for you there's really no issue. It's just building AGI, and the expected outcome is actually better than if we don't build AGI.

Joscha Bach: Yeah, I think the issues are somewhat similar to the internet. The internet is seen as a threat by existing stakeholders, especially in the form of social media and so on. Legacy media are terrified of social media and the way in which people can form their own ideas and synchronize them, in which individuals can cultivate large audiences, in which new phenomena are popping up that you cannot gatekeep.

But from the perspective of society as a whole, I'm not sure that this is a bad thing. I think that ultimately, over a long enough time span, within a few generations, we'll figure out how to use social media productively, and society is going to change as a result, and it's going to be better. Maybe my optimism is unwarranted, but I observe that technology so far has not led to mass impoverishment. It has not led to the creation of larger economic divides and so on. Contrary to what everybody seems to be saying these days, technology has not exacerbated social inequality; it has actually been able to raise every boat, almost every boat, on this planet.

And so I don't see why this is not going to happen for AI. I think that there are a lot of problems that are going to be created by bad actors using AI, maybe even rogue AI and so on, but all these problems are solved using more AI on the other side.

And as long as there are more agents that want to build than want to destroy, as long as there are more players that are interested in a long game that is coherent and sustainable, for that long the forces of darkness will be defeated, I think, as they have always been.

Steve Hsu: Good.

I wanted to ask you, because I've never heard you talk about this, do you have an opinion on the simulation question?

Joscha Bach: Whether we are currently living in a simulation?

Steve Hsu: Yes.

Joscha Bach: Personally, I think the probability of that is low, because the universe looks like what I would expect it to look like if we are based in reality. Something that could convince me that we are in a simulation would be if we observed phenomena that cannot be explained in any framework of physics, something that would fundamentally defy any kind of logic.

And sometimes people feel that they are encountering things like this. For instance, you might talk to a Qigong practitioner in China who is cultivating their qi energy until they are able to project mental states into others, or use telepathy over short distances, or manipulate their organism in interesting ways.

I'm not sure that this cannot be explained by current physics. I think we might need to rethink neuroscience, or the way in which organisms process information and the ways in which they can exchange information. But I haven't seen anything that would compel me to think that physics is fundamentally wrong or doesn't work.

Maybe retrocausation would be such a thing.

Steve Hsu: Do you think it's implausible that, let's imagine a future where AGIs can create simulated worlds, and within those simulated worlds are self-aware beings who maybe aren't aware that they're, quote, artificial?

Joscha Bach: Yes, I think we could be living in the memories of an AGI.

Steve Hsu: those, yes, and those simulated worlds though could have realistic physics, right?

They might be, quote, realistic simulations of the base

Joscha Bach: They don't need to be, right? If the AI is simply trying to recreate how it came into existence based on available data, maybe a future AI is going to read all the social media archives and whatever data is available, and then it recreates simulations of the mental states of the beings that got it to this state. Even the measurements of foundational physics could just be induced memories.

There is no need to make the simulation physically accurate; all it would need is an accurate depiction of the mental state that you're in right now. And that doesn't seem to be very hard to achieve.

Steve Hsu: Yes.

Joscha Bach: So maybe the AGI is now watching this podcast and it's trying to recreate our mental states and here we are.

And maybe this is the only thing of us that ever exists, in this sense.

Steve Hsu: What you're discussing is, in the theoretical physics world, called the Boltzmann brain problem. I don't know if you've ever heard that term.

Joscha Bach: But I think the Boltzmann brain is slightly different. A Boltzmann brain is only there for one tick, for one frame, and then it falls apart again. And what we observe is that we have a memory of a sequence of states, which would need to be engendered in this one tick. And while this is not impossible, it is very, very unlikely, right?

So Boltzmann brains are probably rare. A random arrangement of matter that forms some kind of swamp thing, able to experience itself as something because statistical happenstance randomly imbued it with that capacity, is conceivable. But the simulation idea is slightly different.

It means that there is a sequence of states that we go through, some kind of longer process in which we are cogitating and perceiving and so on. But since our mental states are simulations made by our brain, it's also conceivable that something else is simulating them, with the same degree of perceived fidelity as we have right now.

Steve Hsu: Do you think it's implausible that, let's imagine a future with super powerful ASIs with infinite energy resources, et cetera, do you think it's implausible that they would create simulated worlds which in turn have sentient beings inside them?

Joscha Bach: No, I mean, that's what our brains do too. Our brains also create simulated worlds with sentient beings inside of them. For instance, in dreams at night, we create such simulated worlds, and we have interaction partners that we can talk to.

Steve Hsu: Yes. You don't find it implausible, and...

Joscha Bach: I don't find it implausible. I just don't bank on being in a simulation. It doesn't look like a simulation to me, in the sense that the physical universe doesn't look simulated. I think it looks like it could emerge by itself, and not like some Minecraft world that has too many artificial bits. But for instance, Ed Fredkin, who I got to talk to a few years before he left this plane, was convinced that there are too many parameters in our universe for the universe to be a random thing.

And he actually believed that the physical universe is probably in some server farm in the parent universe.

Steve Hsu: Yes. Yes.

Let me turn to your company, Liquid AI, because I think we've been on now for over an hour, and I want to be conscious of your time. Tell us a little bit about Liquid AI. What is the key innovation there?

Joscha Bach: Liquid AI was founded by a team of MIT postdocs from Daniela Rus's lab. It started out with PhD work by Ramin Hasani, who had the insight that when you are representing programs in a differentiable form, to anneal the program that you're looking for, maybe you shouldn't be using standard neural networks; we should be using a more efficient way to represent this.

And so he came up with the idea of building networks in which you use differential equations to describe the geometry of the function. And when you optimize the parameters of these liquid networks, you end up with something that has more expressivity per unit of compute. So basically, a liquid neuron can express more than a neuron in a traditional neural network can.

And while the mathematics is slightly more complex, the compute that you eventually need for these networks is less. It's not that the thing is learning things that you cannot learn with other networks. You can give it the same loss function and the same data, and it's going to converge to something that is functionally the same model.

But you are able to run the same model on much smaller hardware. And then we felt that it was a good time to build a company around this and to productize it, originally with some people at MIT who thought: oh, this actually needs to be funded and turned into a startup. At the moment, I'm mostly working on the day-to-day things, like how to build an efficient fine-tuning pipeline for this, and how to find customers who want to work with networks like this.
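For readers who want the flavor of this, here is a minimal sketch of a liquid time-constant neuron in Python, assuming the update rule from Hasani's published LTC work. It is an illustrative toy, not Liquid AI's actual code, and all names and parameter choices here are assumptions.

```python
import numpy as np

def ltc_step(x, I, W, tau, A, dt=0.01):
    """One Euler step of a toy liquid time-constant (LTC) layer.

    dx/dt = -(1/tau + f(x, I)) * x + f(x, I) * A

    The learned gate f depends on the state and the input, so each
    neuron's effective time constant changes on the fly; this is the
    "differential equations describe the geometry" idea in miniature.
    """
    f = np.tanh(W @ np.concatenate([x, I]))  # input-dependent gate
    dxdt = -(1.0 / tau + f) * x + f * A
    return x + dt * dxdt

# Toy usage: 4 hidden neurons driven by a 3-dimensional input signal.
rng = np.random.default_rng(0)
n_hidden, n_in = 4, 3
W = rng.normal(size=(n_hidden, n_hidden + n_in))  # stands in for learned weights
tau = np.ones(n_hidden)   # base time constants
A = np.ones(n_hidden)     # equilibrium/bias term
x = np.zeros(n_hidden)
for t in range(100):
    I = np.sin(0.1 * t) * np.ones(n_in)  # synthetic input
    x = ltc_step(x, I, W, tau, A)
```

In a real network the ODE would be solved with a more careful integrator and W, tau, and A would be trained by gradient descent; the point of the sketch is only that a single neuron carries a whole input-dependent dynamical system, which is where the extra expressivity per unit of compute comes from.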

Steve Hsu: And what is your role? Is your title something like AI strategist, or are you writing code? What do you actually do at Liquid AI?

Joscha Bach: Sadly, I'm not writing code, and in many ways I miss writing code; maybe I should make an effort to get back into it. But I'm mostly looking at the space of AI companies at the moment and at the things that we should be doing, that we have on our horizon. So I'm helping with the strategy of the company.

And that's a role that is different from machine learning development. On the other hand, a lot of the practical machine learning development is extremely fine-grained detail work. This is still mostly an engineering discipline in which we tinker a lot, and you need a lot of cumulative small ideas to get things to work.

It's very much an empirical discipline, not so much a philosophical project.

Steve Hsu: If your initiative, the one focused on machine consciousness, were to get off the ground, would you go there and work full time on that problem?

Joscha Bach: I will need to find out how to split my time between the two. I think there is also a degree of complementarity, but I also feel that this initiative needs my input right now, simply because not enough people are working on it at the moment. And I don't quite understand why the notion of machine consciousness is not more popular than it is, because it's not completely outlandish and it's really, really important.

I don't expect that it's going to be a mainstream thing anytime soon, but there should be a critical mass of people who think: oh my god, this needs to be done. We need to understand how consciousness works, and it can actually be done with computers; computationalism is the right way to look at the world and at the mind. These people do exist, but they are relatively few and far between, and I felt the reason I have to do an initiative like this in San Francisco is that there is no place in the world with a larger critical mass of people who consider this a real possibility.

While I was living in Europe, I felt that most people were very skeptical about the possibility that machines can become intelligent, can become generally intelligent, and extremely skeptical about the possibility of machine consciousness. And even in Boston, I felt that people were quite uncurious about this. It is much easier to get support, also if you are in philosophy, by claiming that machines cannot think and never will, than to tell people the opposite: that it's actually unlikely that they can't, and unlikely that they're in a fundamentally different class of systems than us.

Steve Hsu: You know, when I was in the administration here at Michigan State, we were looking at recruiting a professor from Wisconsin. His name was, I think, Tononi; I might have his name slightly wrong. He has a proposal for how to define consciousness in a mechanistic way. I don't know if you're familiar with his work.

Joscha Bach: He is trying to build an alternative to functionalism. Functionalism means that you describe an object, or an object category, by what a thing does. The meaning of a piece of information is its relationship to changes in other information. And that is, in some sense, very deep.

It has to do with our epistemology and how we construct reality. Imagine that there were a water molecule that behaves like a normal water molecule, but it's fake, right? In all its properties; you can even split it apart into some simulated or pseudo oxygen and hydrogen. But it's not the real deal.

In all measurements that you can possibly make, it is going to behave the same. I think that this notion of a fake water molecule makes no sense, because all the other water molecules are constructed in the same way, as a regularization over observables. Every object that we're dealing with is a regularization over observables.

And so pretending that an object is different despite having the same observables does not make sense with respect to that object category, right? If you want to create a meaningfully different object category, you need to propose a feature that is different, that leads to different behavior in some sense.

Otherwise it is going to be, for any object that we can interact with, the same object. And he thinks that consciousness is something slightly different. I believe that he is a deep philosopher who is posing as a neuroscientist. He's a sleep scientist, and he is seeing something that other philosophers haven't seen in the last 6,000 years or so, which on the face of it is bad news.

Right? If you have an idea that is radically different, in that nobody in the history of humanity has seen the possibility of what you are seeing, and it's something that you didn't get to by doing some arcane experiment in the lab, you're likely wrong. Still, it's necessary to do this; you need to go out on a limb, because what if nobody else is doing it?

And the other thing is that he cannot actually express the theory super well. He cannot write a formalization of IIT that satisfies me. Instead, the formalization of IIT seems to be designed for the incentives of a defective field.

It looks a lot to me like somebody saying: oh, for a theory to be able to compete with the gold standard of theories, it needs to have axioms and mathematical formalism and predictions.

And so he comes up with a section that he calls axioms, and they are just a description of what he means by consciousness. They are not axioms in any mathematical sense. This description is not too bad; I have a few objections to the way in which he tries to characterize consciousness. Maybe the unity and so on is a little bit overstretched, because it might obscure a long tail of experiences that are at the fringe of what consciousness is.

But on the whole, it's not too bad. Most theories of consciousness that other people are coming up with, with hand-waving and so on, are not that detailed. He says: here are a bunch of features that I commit to, that I actually want to explain. So he is putting himself out there. That's nice.

The mathematization? Well, to be advanced mathematics, it needs to have Greek letters in it. So it has this factor phi, which describes the integration of information, but it's not clear what the scalar means. How does a high phi correspond to different elements of consciousness?

Steve Hsu: At the time, I looked at some of these papers and wasn't fully satisfied, but I have to admit I didn't exert that much energy on it.

Joscha Bach: So the way in which I understood phi, and I haven't revisited it for quite some time, so maybe my interpretation is not current anymore, was that it's some kind of mutual information measure. You take your cognitive system and you slice it every which way, and you look at how much the information is correlated across the system.
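To make that slicing intuition concrete, here is a toy score in Python: the minimum mutual information over all bipartitions of a small binary system. This is only a sketch of the intuition as described above, not the actual IIT formalism, which works on cause-effect structures and has changed across versions; the function names and the binary-state setup are assumptions.

```python
import itertools
import numpy as np

def mutual_information(p, part_a, part_b):
    """Mutual information in bits between two disjoint groups of binary
    units; p is a flat array of length 2**n over joint states."""
    def bits(state, part):
        # extract the bits of `state` that belong to `part`
        return tuple((state >> i) & 1 for i in part)

    pa, pb, pab = {}, {}, {}
    for s, ps in enumerate(p):
        a, b = bits(s, part_a), bits(s, part_b)
        pa[a] = pa.get(a, 0.0) + ps
        pb[b] = pb.get(b, 0.0) + ps
        pab[(a, b)] = pab.get((a, b), 0.0) + ps
    return sum(ps * np.log2(ps / (pa[a] * pb[b]))
               for (a, b), ps in pab.items() if ps > 0)

def toy_phi(p, n):
    """Phi-like score: the minimum mutual information over all
    bipartitions, i.e. the integration that survives the weakest cut
    (slice the system every which way and take the worst case)."""
    units = range(n)
    best = float("inf")
    for k in range(1, n // 2 + 1):
        for part_a in itertools.combinations(units, k):
            part_b = tuple(u for u in units if u not in part_a)
            best = min(best, mutual_information(p, part_a, part_b))
    return best

# Three perfectly correlated bits: states 000 and 111, each with p = 0.5.
n = 3
p = np.zeros(2 ** n)
p[0] = p[-1] = 0.5
print(toy_phi(p, n))  # every cut carries 1 bit, so this prints 1.0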

And when people made objections to how phi is being computed, they just released a new version of how phi is being computed. I got the strong impression, also in personal interaction, that the phi itself is a core part of selling the theory, but it's not a core part of the theory itself.

It's more like a stand-in until he's able to properly express what he means by the core part of his theory. And the predictions are: okay, information integration is high in the neocortex, and therefore the neocortex is conscious; it's low in the cerebellum, and therefore the cerebellum is probably not integrated into our consciousness, or is not conscious by itself.

And that is somehow what you would expect, but it's not clear that this is a prediction in the sense of positivist science, something you didn't know before you made the prediction. It also led to this weird thing that Scott Aaronson jumped on, where Aaronson pointed out that if you have some network of XOR gates performing a boring operation, you can optimize it for having a very high phi.

And Tononi bit the bullet and said: yeah, maybe then it's conscious.

Steve Hsu: Yes. I remember this now. I'd forgotten.

Joscha Bach: Yeah, I wouldn't want my theory to have these properties. But my main issue with his theory is that he goes out in public and says consciousness is not necessarily something that a digital computer cannot have, but the digital computer needs to be organized in such a way that it has an extremely high phi. So if you have a neuromorphic computer that is constructed in exactly the right way, even if it's digital and electronic and works with bits, then it can conceivably be conscious, if it satisfies all the necessary properties.

But a von Neumann computer can never be conscious, because it's too linear; it doesn't have the necessary phi. On the other hand, if you ask him, he does not deny the Church-Turing thesis. So he admits that every digital computer can be emulated on every other digital computer, as long as it has enough resources, right?

As long as you have enough memory and are willing to wait for the answer. And so we now have this weird situation where your neuromorphic computer says: oh my god, I have a phenomenal experience, because I have this high integration of information. And when you look at it, you just see bits being processed in this neuromorphic computer, flowing through logic gates.

But the gates are organized in the right way. And now you emulate this thing in a simulation on your von Neumann computer, and it's going to simulate exactly the same flow of bits. At some causal level it's going to produce the same output, only now it's lying, which means that the neuromorphic computer was also lying.

It was producing this output not because of some property that the von Neumann machine cannot have. So now we are in the realm of epiphenomenalism.

Steve Hsu: So, Joscha, I want to close out with one last question.

It's a philosophical question. Imagine we've got the transporter from Star Trek working, and the way it works is: it scans you, destroys you, and transmits the description of you to London. The transporter unit in London recreates a copy of you using that information. And then you go about your day, take all your meetings in London, and maybe you beam back at the end. Do you have any qualms about stepping into that machine?

Joscha Bach: I remember that I had a very different perspective in the past. I thought this kind of teleportation is very unsatisfying, because you're clearly dying, and then a clone of you is being revived at the other end, and that is not you. But I've come to the conclusion that identity isn't actually a thing. There is only the now, and my memory of my past is what strings these past instances together. There is no actual continuity between states; something else is going on, and you are just simulating your own continuity to a very coarse approximation. So, for instance, you could think of an electron as something that exists whenever there is an environment that affords the existence of an electron in the universe.

And so the electron is an operator that manifests wherever an electron-shaped hole is there, so to speak; the operation is happening. And an operator like an electron is something that doesn't have an identity, in a similar way as addition doesn't have an identity. If you perform a plus on two numbers, this plus is not a thing that has an identity or a continuity. The plus that somebody else is using is not the same plus or a different plus; it's just the same operator. And I've come to the conclusion that everything in the universe that we perceive as an object is an operator in this way. I am a complex operator that exists wherever there is a Joscha-shaped hole in the universe, wherever the conditions for my existence are being met.

I will exist and experience myself as myself. And it is very difficult to achieve the conditions for my existence multiple times, because so many things have to come together to get this integration of memories and traits and behaviors in a certain volume of space, so that it can manifest. But wherever that is happening, that is where I am.

And so in my current thinking, I would say that I should not have qualms about using a teleporter like this.

Steve Hsu: Your answer is eminently logical from a materialist perspective, and I have to admit that, like you, I have flip-flopped in my opinion on this question over my life. That's why I sometimes ask it.

Joscha Bach: Yes, but I believe that the discomfort we experience is a result of the way in which we model reality, not of reality itself. We basically tend to assign timelines to objects; this is the way we create identity. And the benefit of having such an identity is that we can do credit assignment: we can, for instance, assign future rewards to ourselves for present decisions.

And we can evaluate our past behaviors to improve present behavior. And we can track other objects in the world that have such a history and that are evolving. There are objects that are evolving, and it's easiest to model, to experience, these evolving objects not as different instances that are updated slightly because the universe updated them and made a new release, but as the same object that is intrinsically evolving. And that, I think, is just an inaccurate representation.

Steve Hsu: Great. So let's end it there. And let me ask you, is there any place where someone who's interested in your ideas can find sort of an organized introduction to your thinking?

Joscha Bach: I haven't really made one super organized introduction to my thinking, and it's something that should probably be done. If you are not just interested in following me on Twitter, you can look at my YouTube page, where I have collected most of the talks and podcasts of recent years. For the past few years, my life has been very overwhelming.

I have kids and so many projects going on, and I find it difficult to set aside time for long-form writing. I hope that at some point I am able to sit down and write a long book in which I explain most of these things. But for the time being, the best way to get informed about the mainstay of my ideas is, for instance, to listen to the series of talks that I gave at the Chaos Communication Congress, in which I tried to distill many milestones of my ideas and thinking into one-hour talks, so it's not that hard to consume. You can also find them organized on YouTube as a playlist.

Steve Hsu: Great. I will put a link in the show notes. Thanks again for your time, Joscha. I hope to see you in person sometime soon.

Joscha Bach: Yes. It was a big honor to be on your Manifold podcast. Thank you.

Steve Hsu: Cheers.