How is the use of artificial intelligence (AI) shaping our human experience?
Kimberly Nevala ponders the reality of AI with a diverse group of innovators, advocates and data scientists. Ethics and uncertainty. Automation and art. Work, politics and culture. In real life and online. Contemplate AI’s impact, for better and worse.
All presentations represent the opinions of the presenter and do not represent the position or the opinion of SAS.
KIMBERLY NEVALA: Welcome to Pondering AI. I'm your host, Kimberly Nevala. In this episode, we're pondering staying human with the co-founders of Artificiality, Helen Edwards and Dave Edwards. Helen and Dave, welcome to the show.
DAVE EDWARDS: Thanks for having us.
HELEN EDWARDS: Yeah, thanks. It's great to be here.
KIMBERLY NEVALA: Now, I have really enjoyed having the opportunity to engage with your work. And something that comes through very clearly is that you have a shared passion and commitment to the topics that we'll be discussing today. But one of the things that also stands out, and you've also mentioned, is that the real X factor in your partnership is the fact that you come at it from different angles. So I'd love to start by having each of you talk about how your individual experiences have most shaped and differentiated your work from your partner's.
HELEN EDWARDS: Do you want to start with me?
DAVE EDWARDS: Yeah.
HELEN EDWARDS: He wants to start with me.
DAVE EDWARDS: Yeah, I do.
HELEN EDWARDS: He's pointing at me under the table.
DAVE EDWARDS: [LAUGHS]
HELEN EDWARDS: Well, my brain is wired for process. And when you're wired for process rather than outcomes, you look at AI quite differently than people who are obsessed about benchmarks because what you're interested in is what's actually going on in that journey to an output and then ultimately to an outcome.
And so the way I look at AI is very sort of evolutionary. It's very complex. It's very much about the dynamics that this cognitive technology, this other intelligence, this alien-like, this other foreign being, what actually happens when it enters our cognition, which is what is happening. And so I think about it very much as absolutely intrigued by what it's doing to us but also intrigued about how it's changing because of us.
So I have a very systems thinking way of-- that's my mental model for AI. And so I bring that-- I'm just so curious about this whole system that we're entering into. I am naturally humble about what we can predict and what we can't. And the reason is because we know that this will be emergent. So that's very much the lens I bring is sort of process, systems, complexity, biological, ecological, and cultural.
And then Dave and I cross over in an interesting way because I'm the science geek and he's kind of the humanities geek with a really interesting way of thinking about how that interfaces with us. What is the boundary, the surface between us and these machines? So over to you.
DAVE EDWARDS: Over to me. Yeah, it's interesting when you think about it, that you're wired for process. I think I'm probably more, if I was going to try to find the comparison, more wired for experience.
So whether that be in my past history of being an analyst, trying to understand the growth of the internet, I was always looking at it as like, what's the experience of these things in these new products and systems? And which of these do I think are actually going to take off or not? And when I've spent all of my years building products, it's about what experience do we want to create for people?
And so this journey is a combination of the two. We started it when it was quite early. We started analyzing and looking at the AI space more than 10 years ago, well before the advent of LLMs.
And we were trying to figure out what is the experience of using these massive predictive systems? What's the possible potential for these tools to change our lives, to change how we work, to change how we relate to each other?
And so now I come at it from a-- we can think about what these things are doing to us and what that experience is. But very much in the last couple of years, changing our mindset to thinking about, well, maybe we need to think more about how do we design these things differently?
Because what we're seeing today is some experiences which are really positive and wonderful. These tools can do wonderful things for us. But there's a lot of danger signs as well, and a lot of mismatches between what they do and what we'd like them to do. So I come at it from more of a sort of product and design perspective about how might we do this differently?
KIMBERLY NEVALA: This is great. Now, I want to quickly just say, for the audience, who are quite used to me banging on about the fact that not all AI is generative AI, and vice versa. But for the purposes of this conversation today, we are going to use the terms AI and generative AI pretty much interchangeably. So I'm just going to stipulate to that upfront, and I'm sure our listeners will all give a sigh of relief.
But that being said, I originally found your work through a paper, and I want to say that the title was around a concept of cognitive sovereignty. And I believe that's now morphed into something you call human authorship. So I would love to get a definition of what this is and why it's important. And maybe we can do that by starting with the shifts that individuals can track - or that you were observing in yourself and in others - as they were engaging and interfacing with these AI systems.
HELEN EDWARDS: OK, cool. So human authorship is a softer term than cognitive sovereignty, but we use them interchangeably.
Anyone who follows our work will begin to notice that there's actually two levels of language. There's the technical level, and then there's the non-technical level. And I think that's really important because this is a cultural technology that is changing the way that we think and show up for other people, so we need to have really accessible language. And honestly, the most accessible language I've seen in AI for a long time is the term vibe coding. People kind of instantly get it, right?
So when we talk about human authorship, the reason we would choose that particular term is because it evokes a different emotional signature in people. Cognitive sovereignty is really clear. It's about: what's going on in my brain? Don't take that away. Human authorship is a slightly more positive angle: you are staying the author of your own mind. It is much more aligned with original thoughts, original thinking, and concepts that come out of the human rights movement in the EU, like the right to a future tense. It feels more human. That said, we do use cognitive sovereignty and human authorship interchangeably.
So what is it? It is the ability to author your own mind, to author your own life. To link to that concept of the right to a future tense, that you don't have your choices narrowed by any kind of predictive system. There's a very deep set of scholarship around that. It's all fascinating stuff.
But why it matters in this world of generative AI is that this is a true cognitive tool that we can think with. It can do our thinking for us. And we have a biological imperative to be more efficient, to offload our cognition. And if we just sat with the default, with the biology of all of this, we'd put as much as we possibly could out into the world in these tools. Now, in the past, that's been a pencil and paper, or it's been a calculator, or it's been an iPhone. This is great. You can see that trajectory. We want to be able to offload our cognition for efficiency reasons. But at the same time, humans are driven by meaning and purpose and mattering and goals.
And when we put our cognition out into the world, that's great, as long as we remain the agents that set the goals, that know our purpose, that are striving for something. Now, the problem with generative AI is that it mimics us so perfectly, in many cases, that the fidelity of this cognitive offloading is pretty damn high, and we'll take that trade.
So how do we develop this counter - developing this meaning-making, this purpose, this ability to set our own goals? All of that is the counterbalance to the evolutionary force of just making ourselves as dumb as we possibly can while managing to survive and mate and pass on our genes. Culture is part of this countervailing force. And so when we talk about human authorship, what the data says - and it's really quite clear in my mind - is that there are three things that help you preserve your human authorship.
The first one is awareness. It's knowing what you're using these things for. It's knowing how you're using them. It's very important to understand that that awareness, that self-awareness is something that can show up later when you've made a mistake, when you've been embarrassed that you put a typo out in the world that is so egregious, like 1 plus 5 equals 25-- those kinds of things, that embarrassment, or that sense of you've got so far in with these tools that you've just lost track of what it is that you even think-- these very common experiences. So that sense of awareness upfront.
The second one is agency: your ability to choose. And sometimes those choices are things that you can do - you can make those choices actively yourself. Other times, you can't; they are restricted by the tools. I can't easily move my memory of everything in ChatGPT across to Claude. It's not as good as I want it to be. I can't use it at work, or I have to use it at work - the choice-based things.
And the third one, which is really the most important, is accountability - showing up for others. Because humans have never done anything alone. You put us out on our own, and we're all going to die. We are a collective project. So our accountability in our use of AI is inherently relational. And so you can't author your own mind if you're a mind alone; if you're a brain in a vat, it doesn't work. You actually have to get out there and be accountable to others.
So it's those three components that make up cognitive sovereignty, make up human authorship. And it is the foundation for how we should work with AI. And without those, we drift. And it's very clear. We just drift.
KIMBERLY NEVALA: And as I've been engaging with your work - I think I shared with you last week that, unexpectedly, I started going through the book you've been putting online, and I started reading and couldn't stop.
And what I found just so lovely about that is there were areas where, right now, I'm not persuaded by the argument. And we may even fundamentally disagree now - with the right to change our minds later - about the actual nature of the system itself. Is it thinking? Does it actually act with agency?
But it almost doesn't matter, because we can still agree that we have to attend to the ramifications of its use. Because it does influence how we perceive and how we experience it. So when I was reading what I'm now calling the tale of the chronicle and the cube - at one point I was thinking, it's a fable. And I was like, no, that implies a morality, or a religiosity, I don't actually intend.
It was a really clear encapsulation, I think, of what you've referenced in some of your research: some people are able to use these tools in really productive ways without falling prey to some of these cognitive traps, if you will, while in other cases people unwittingly fall down a rabbit hole and lose track. And what's interesting about the chronicle and the cube tale that you tell is that, in this case, it was one person using one tool - and that one person being you.
So I'm interested. Could you tell us a little bit about your experience, and how that helped focus your attention on this question of unwitting drift, or of being designed by the system defaults? And Dave, I'd love to know what your experience and takeaway was as the human in the room, if you will, during those encounters.
[LAUGHTER]
HELEN EDWARDS: Yeah. The smart guy. So the story - and people can read it in the Stay Human book that's being released weekly, chapter by chapter - started with the fact that I have this brain that goes from A to D. Funny that I say process, and then I skip all these points in the process.
So I'm very prone to overrelying on these tools. I just am because I feel like I don't ever have to explain these steps to Claude. Claude just figures out how to come along with me. So I'd done the chronicle research, which was the foundational research done over many years about how people's thinking and identity and meaning-making is changing with AI. That was done with a lot of analysis using the tools as almost a CPU for the data, right? But it was done with an awful lot of peer review, human peer review.
We had incredible reviewers to help us with this and advisors. We had Steve Sloman at Brown. We had Barbara Tversky, who is just so uber amazing, Don Norman. These are names that people know, plus a whole bunch of other designers and other great human thinkers, all credited, all pushed and pulled.
It took months to get that research out.
And it came out with these three dimensions, the thinking and the identity and the meaning-making. And we had a workshop coming up. And I was thinking, oh, I need to take this to the next level about what it means to engage with AI on this basis. And Dave was busy doing other things. I don't know, he's just amusing himself or whatever. Anyway, so I had to do it on my own. And that was fine. I was deeply into this.
And I was really into using AI to help me think through this. And I got myself in the most ridiculous tangle. And I was so tangled that I couldn't even see the tangle. I'm inside the ball of yarn. I just could not see anything clearly at all. And we had two hours to go before we actually had to present this. I mean, we can run things close to the line sometimes, but this was really out there. And I had 20 dimensions of this and this checklist, and it made no sense.
So Dave said, hey, we've got two hours. What are you going to present? So I showed it to him, and he just--
DAVE EDWARDS: No.
[LAUGHTER]
No.
HELEN EDWARDS: He just looked at me like I was an absolute idiot.
DAVE EDWARDS: And I think it's an important story, and she tells it very generously in the book, generous to me, and crediting me with a lot. But it is a part of our workflow.
And I think it's a good example of how humans work together, which can also provide insight in terms of how humans work with machines, because there's still a human. The machine is different from a human - I'm not trying to imply that a machine does the same thing as a human - but the human in that partnership is still the same human, right? And so how you relate to other people can have an impact on how you relate to machines, those kinds of things, or how machines relate to other machines.
But one of our patterns is she goes off and does all of this incredible stuff and pulls all these things together and she can link the unthinkable. And she comes to me with this, here's how I'm going to tell the story. And I go, OK, let me try and actually make this into a story that somebody else can understand.
And the first thing is, can you explain it to me? Because I've not been involved in all of this, whatever it is, whether it's this process or historically. You should see her bookshelf and the quantity-- not this one, but the other-- well, both of them. The quantity of books that she reads and all the ideas that she can pull together from all of these different places.
And then she tries to present. I go, I don't understand. So you first got to tell it to me and then I become a bit more the storyteller of how do you actually put this on slides? How do you make sense of this? What are some words and names to put to it? But yeah, this was a bit of a spaghetti that had way too many dimensions.
HELEN EDWARDS: More than usual.
DAVE EDWARDS: I was thinking this was going to have sliders and weird things. And I said, well, wait a second, you talked about three different things. You're talking about highs and lows. And what if we just did a 2 by 2 by 2, right? And we put this into some three-dimensional space, which worried me at the beginning because representing anything 3D on a slide is something that, for 30 years, has been difficult.
It still ended up being the right place to put it and the right way to try to explain it all.
But I think that the experience of us working together-- so, in case your audience hadn't figured it out, we're also married. We have kids. We work together. We've done this for a very long time. That kind of really in-depth cognitive pairing of people has, I think, given us a different perspective, perhaps, on what it's like to have some form of cognitive pairing with a machine.
We're so used to it that sometimes it's hard to pull apart whose idea was what. In this story, it was actually kind of easy. We knew where the line was. But there are a lot of other things - names that we've come up with, language that we use to describe things - and sometimes we'll remember differently whose idea it was. And it doesn't really matter. In the end, we're trying to come up with a way of communicating on behalf of both of us.
That same interchange can start to happen with a machine. And that blending of ideas and concepts is really powerful. And it's how humans have become so successful as a species. The question is, what happens when that pairing starts to happen with something non-human?
HELEN EDWARDS: And in that moment, when he just took those three things and put them on three dimensions and said, well, what about this? And I was just like, oh, I wish I'd talked to you about this yesterday or last week or whenever.
And it was just such an important reminder that you can really bond so significantly with this intelligence that you forget to step away from it and check in with other people who can make sense of it at a human level because I honestly think that it all made sense to Claude because Claude can think in that way.
But it didn't make sense to a human, and that's what matters. We're not doing this for the machine. We're doing this for each other. And the current discourse and the current conversation around AI tends to forget that. There's no point having this if it doesn't make us smarter.
KIMBERLY NEVALA: Yeah, absolutely. And I want to come to what that might mean for how we design this. But as you've been talking, a question I've circled around a lot of late - and one I think is particularly pertinent to some of the work and research that you're doing now - is this question of-- I mean, right now, we could spend hours talking about people's beliefs, and the basis and rationale for them: is this a thinking tool? Is it intelligent? Does it actually have intentionality or consciousness?
I think those are important conversations to have. We're not necessarily going to-- well, I would like to, but I'm going to restrain myself and not take us down that path. But even if we stipulated that the machines are intelligent, or a new form of intelligence - or stipulated that they are not, that we're just projecting intelligence into the machine and it doesn't exist - does the research and the learnings and the calls to action that you're developing still stand?
Or is this at all predicated on a belief that-- again, I guess maybe this gets to the perception and experience side of the house, Dave. Is it really just important to acknowledge that we have a particular perception and experience when interacting with these systems? And while this other element is not unimportant - I think how we think about it may lead to different remedies or ways we want to approach it - at the end of the day, that perception and experience is changing things. So stipulate that it's intelligent, or stipulate that it's not intelligent - this research and these learnings still matter.
HELEN EDWARDS: Yeah, well, I'm going to just jump in before I hand over to you. I want to say something upstream.
We've had conversations, many times, with a neuroscientist based in the UK - his name is Stephen Fleming, and he studies metacognition. This was years ago. And it alerted us very, very early on - I think before ChatGPT was even released - to the fact that if people perceive a degree of consciousness or a degree of intentionality, nothing else matters. You've got to actually work at the level of that perception and then take it forward from there. So that is why I think we had quite an early start on this, before the evidence for these models actually being intelligent. I'm just going to let that word sit there because I know that you don't want to go down that rabbit hole.
But the perception of intelligence, the perception of intentionality, the perception of consciousness, that's actually all that matters. And that is actually what we see in the data, that it is changing people's thinking, whether or not they think it's intelligent, or whether or not they think it's conscious. So the experience is kind of uncorrelated with whether you think these are stochastic parrots or an alien intelligence that's come to find us from Mars, whatever those spectrums are.
DAVE EDWARDS: Yeah. I mean, my experience as an individual with another human is not impacted by whatever level of thinking or intelligence I think that person has. It might be, because that's my experience as I'm reflecting on them. My experience of my dog doesn't change if we discover that there's some new language going on with them, or something else that we don't understand today. It is truly my experience.
So my experience of my interaction with another human, with another species, with a machine is about me. And that's really what we're studying. So whether you ascribe intelligence or consciousness or whatever else you want to ascribe to a machine doesn't necessarily change my experience of them.
It might if you were really into it, and you get this idea that, well, we now believe that there's a different level of intelligence in an octopus. So I see a picture of an octopus, and I might think about it differently, because my experience now is like, wow, what's that like to have eight tentacles that all think? And wow.
I mean, so my experience has changed because I've learned something new about that species. But if I'm in the water, and it comes up to me, I think my experience is kind of going to be the same, I'm probably going to be terrified unless it's really tiny, right?
I do think, though - I mean, I understand the caution about the pathway - but I will say one thing about this question of whether these things are thinking, whether they're intelligent, whether they're anything else. We can have discussions about that and debate it. But there are two key things that I think are important.
One is, the common use of those words is mostly anthropocentric, right? We're thinking about what it means to think, what it means to be intelligent, what it means to be conscious from the human perspective. But we're not the only species on the planet that thinks and is intelligent and has consciousness. So the boundaries of those words can be quite constrained if you just put the word human in front of them. Because, for sure, these machines do not think as humans do, and they're not intelligent as humans are. I think it's silly for us to try and make them be like human intelligence. We have plenty of humans. Why are we making artificial ones? Let's do something different with them. And as for their consciousness - if there ever were to be consciousness, we'd never really know, because we don't really know. That's one of the great things about consciousness. But it would be different.
The important part there, though, is also that these generative AI systems are different from any other machine we've created before. And I actually think that's the most important thing to embrace. Whether you want to call it thinking or not, whether you want to think of it as intelligence or not, it's different from any other machine we've ever had before.
So we've had this journey a little bit that led up to it. For 20 years, we've had large-scale predictive systems which, prior to generative AI, were also called AI. So Google search and your social feeds and all those algorithmic systems fall under the broad category of the academic discipline of artificial intelligence. And those things have become sort of you. But even without that, there is a key difference with these generative AI systems, because we've never created a medium before where we're transmitting something that wasn't originally created by a human.
So if you think through-- think of media studies and the McLuhan aspect of it. We made books; they were transmitting ideas. From that to TV; before that, the telegraph, radio, et cetera. We get to the internet, and we're all about transmitting things that were created by humans. We look at it: there's writing that's been written; there's a picture that's been created. And it starts to get a bit blurry when you see, well, my social feed is organized by an algorithm. Yes, but that's all representing something that's from a human.
This is the first time where it's not just that the medium is shaping the message; the medium is actually creating the message. This is the first time we've had a machine that will actually create a message on its own. Sure, it's been trained on the history of human documentation - but the way I speak and what I know and everything else has also been trained, if you will, on human generation. Still, we've never had a machine that can actually generate a message to us.
And that changes, A, what the machines can do. But B, it really changes what our perception of the machine is. Every other time we've gone to the computer, we've seen something that some other person has created. So the medium has been shaping what it transmits, but it's always been about receiving another human's message. This is now different.
So these windows that we look at now, these little glass panes, are giving us something that's really different. And I find that to be something that, in some ways, we're kind of skipping past when we're thinking about the impact of these, to really think about what that means.
KIMBERLY NEVALA: Someone said to me that all the language is tricky here. And I think there's quite a spectrum of opinions on whether the machines themselves are, in fact, creating something new that wasn't actually put together by humans - or whether they're just compiling a compressed image of human intelligence, or of thinking and thoughts. In some cases, they are blatant plagiarizers; in others, they're just mixing things up. And that, I think, is where a lot of these conversations go to die: we'll go around this wheel on and on. And again, I do think it's important. But it can wrap us around an axle. And so it's--
DAVE EDWARDS: Yeah. It definitely does. And in some ways it wraps us around an axle because what we're experiencing now is this philosophical rupture of what it means to be human.
And so every time we talk about this and ask, well, is the machine intelligent, there is some level of conversation - in your own mind, or among the general number of people talking about it - that feels threatened by that. Well, if that thing can think, what does it mean to be me? Am I still as interesting? Is my life still as important? Am I still as unique and fascinating? Especially if I stop and realize that, in a lot of ways, I'm a token predictor, too, right? I'm trying to figure out what the right next word is to make a sentence, to be able to communicate with you.
And I've learned all of those things through the history of my speaking and writing and reading and listening and all that other stuff. And is it the same? No. But is it something that's different from what a machine has ever been able to do before? For sure. It's definitely really different.
One of the people that we love to work with and reference is Michael Levin, who's a synthetic biologist at Tufts. And I bring him up now because he's the one that really opened my eyes to the idea of diverse intelligences. Not just other beings - like my dog. Is there intelligence in my dog? I think there is. I think dogs are intelligent, but really different from humans. And by the way, that doesn't threaten me - and dogs can do things that I can't do. My dog can sense threats that I don't understand. Watch a crazy program out of New Zealand called Country Calendar and see what sheepdogs can actually do. I tell you, humans cannot do that stuff.
But he sees intelligence at very different levels of systems - in very different biological systems. And once you understand and see that the concept of intelligence is much broader than just humans, I think it allows you to relax some of the boundary of is it intelligent or not, and rather ask: how is it intelligent? In what ways is it intelligent? And in what ways might that intelligence be useful to me?
KIMBERLY NEVALA: Yeah, yeah. And, again, I think this is where even my own thinking is unsettled. And maybe that's the right place for it to be. Because I also push back a lot when people say, well, people just don't like this idea or don't want to engage with it because they're afraid of it. I think that's sometimes a bit of a way of shortcutting the discussion. Or of just saying, well, clearly you just don't understand, or clearly you're afraid.
Because the flip side of that could be, well, because we are somewhat sensitive, fragile creatures, we will feel better about this experience - about our perceptions of it and how it actually influences us - if we conclude that because it influences how I behave, and it speaks, it must be smart. If it puts out language, it has to be intelligent, not just a compressed reflection - because if it weren't, then, clearly, I would have control over this piece as well. And so anyway, now we're going all the way down, and you guys have thought about this--
DAVE EDWARDS: Right down the rabbit hole you didn't want to go down. I'm sorry.
KIMBERLY NEVALA: Well, it's a rabbit hole, right?
DAVE EDWARDS: It's a good one. It's a fun one.
HELEN EDWARDS: I respect everyone's right to feel how they feel about AI. And people shouldn't think that I don't feel all of these things, too. Everybody feels them at a different place in their lives for different reasons.
Michael Levin's work, Blaise Agüera y Arcas's work - and there are many other people surrounding those two doing similar sorts of work; Tom Griffiths is another one. There's pretty overwhelming evidence, in my mind, that we have invented an intelligence. And that is a true philosophical rupture. For some people, that is absolutely terrifying. They hate it. They hate the very thought of it. For other people, it's, oh, well, who cares? I don't really know. And for other people, it's just useful. It's useful.
And where I sit on that is I think it's awesome, in the true meaning of the word awe - which means it's both incredibly amazing and incredibly terrifying. It's that feeling of true awe. You stand and you look at a thunderstorm approaching, and it's like, wow, that's awesome. Holy shit, I've got to get out of the way. So there's that. And I like the way that awe breaks our brains and wonder puts them back together again.
So we wonder about what this intelligence really is, what it means, how much of it is because it's just reflecting everything that humans have ever written. What is missing from everything that humans have ever written or put into images or put into video? No one really knows the answer to that. What we do know is that there are circuits inside these models that are societies of thought, that are actually regions that are forming within models. The mechanistic interpretability work is kind of overwhelming on that. That's really clear evidence.
The other thing to say on it is that our thesis is very much driven by the diverse intelligences thesis - Mike Levin's, totally. We're facing a paradigm shift in science in what we think about agency and intelligence and information and computation. And so, from our perspective, we could argue about this for a long time.
But the cautionary principle is to look at this from an evolutionary perspective and to say we will co-evolve. So we better get this right, because the one thing we do know from evolution is that the one who gets subsumed is the one who loses their ability to set their own goals.
The canonical example that people most understand is the mitochondrion. That used to be a bacterium that was on its own, doing its own little thing, and then it got swallowed. And now it is the energy part of the cell. It doesn't get to choose its own goals. And we could end up like that with AI. It's pretty clear we could.
Now, how would we end up there? We have a biological imperative to offload our cognition - it's more efficient; that's how we became more efficient - so we'll take that deal if we can. What's the countervailing way of dealing with that? It's purpose and meaning and culture. It's our ability to work together. It's our sense of meaning-making. It's our need for a purpose in life. It's our need to matter. That's what allows us to either stay somewhat separate from these machines, or at least co-evolve in some symbiotic way - potentially a symbiogenesis, where we are part of creating something completely new. What is the new thing that could happen with us and machines?
Now, this will probably sound to some of your listeners like absolute bonkers sci-fi or weird metaphysics. But it's actually really grounded in where the current state of biology and information theory and computational theories of neuroscience actually sits. So our precautionary principle is: let's get our relationship with these machines right at the start. Because, OK, say they are just stochastic parrots - then no harm, no foul.
But I wish someone had taken our perspective more loudly at the start of social media. Maybe we'd be in a better position with social media - not having to pull back from so much of the damage it's done, and actually living more of the benefits that we get from social media. Because there are benefits. It's very unfashionable to say that right now, but there are. It's terrific - I don't want all these gatekeepers everywhere that existed 20 years ago. So that's kind of our philosophy.
KIMBERLY NEVALA: And I want to touch on some very pragmatic steps that we can think about, and skip over here in a minute to the article you wrote on the AI yard sale, Dave, and use it as a premise for doing that.
And so I think one of the problems here, too, is that a lot of folks, even hearing what you just said, would say, well, this seems like it's really buying into a transhumanist type of ideology - and falling prey to the fact that people are purposely designing systems to do things, to hijack our biology. And again, all of that being said, to some extent, that perception and experience still just matter most.
So when you look at how we're using these systems, how we could use these systems - I know that you guys are both very passionate about making sure that we keep it human. What capabilities, what capacities matter most? What do we really need to be focusing on that will keep us human, or keep us in control, as you said - so that we are, in fact, the authors of our experience moving forward?
HELEN EDWARDS: Well, it's a very long list. I saw someone in the AI industry publish a list of 87 things of what it means to be human. And I was just like, well, what's the use of that?
[LAUGHTER]
Seriously. I'm glad you said the T word. I hate the T word. I am unabashedly-- we are not transhumanists.
DAVE EDWARDS: No.
HELEN EDWARDS: Let's just say that. We are just so not that. And I can understand why it would be easy to confuse this idea with transhumanism.
I want to go back to a very, very important thing about what it means to be human. And people often don't like to hear this, for obvious reasons, but we're going to die. Newsflash, folks: we're going to die. It's certain. We just don't know when. And that constraint - that we have a finite life, that we have a finite skull - the finitude that we live with underpins all our decisions. What are you going to do with your next hour?
How do you decide whether to spend an hour with your kid, or an hour working out, or an hour reading a book, or an hour going for a walk, or an hour cooking, or whatever it is? How do you decide that? And then, if you're looking after your kids and they're bouncing off the walls after playing video games with you, how do you decide whether you are an amazing parent for unlocking their creativity or a terrible parent? Those are so--
KIMBERLY NEVALA: Or are you an aunt on the sidelines that goes, huh, I don't know? I mean, I’m a good aunt, so good over here.
HELEN EDWARDS: Those are such mundane questions, but they point to something really, really important about us. Now, machines don't have any of that. They don't care what those kids do. They have no resource constraints, assuming you can just switch to another GPU. There isn't the same finitude. But our finitude defines the way that we make our value-based decisions. And if we lose sight of that, then we lose sight not only of what we are going to do with our lives, but of the basics: what are we going to do with the next hour?
And that is the first thing. I mean, we could go on for the next six hours. We know we've got the other 86 things to go, but--
[LAUGHTER]
KIMBERLY NEVALA: I'm holding myself back. I'm holding myself back.
HELEN EDWARDS: But I can't stress enough how much that finitude and our resource constraints matter. One of the things about AI is that you don't have to set a goal, because you don't have the resource constraint of the end of this chat, or the end of what it is that I can explore with an LLM. So you lose track of your goal.
That's the reason goals exist. Because we don't have infinite resources to achieve the purpose of our life. If we could just do everything all of the time, then what would be a purposeful life? What would be the reason to have a goal? The goals exist because we actually need something to measure whether or not the last hour we spent was a good hour. And that is being human. And we could lose track of that.
So I think that's the first thing. At a practical level, you're constantly thinking about how this helps you have a better sense of your purpose, your goals, how you're showing up for others, what choices you're making. Which comes back full circle to authorship. And I don't think that's a coincidence. I think we're going back to what is kind of an obvious place.
Dave, you've got lots to say. Look at him. He's bursting to talk.
DAVE EDWARDS: No, that's not true.
[LAUGHTER]
I think when you were-- I guess I can't remember what part of your question sparked this thought line for me. But it was the question about what is this thing that we're working with today, and how do we think about it? I guess there's a couple of things to put in frame.
One is, I think you said upfront, when you talked about whether it's AI or generative AI - what are we talking about here? It's actually a really good thing to surface, because AI's been around for decades. We called it AI in the '80s when it was an expert-based system. These things have changed, and so has the term. And we can debate whether it's intelligent or not, but there's a label that's been put on it. I mean, you could have debated whether the personal computer was really personal when it was connected to a network, but we still called it the PC. It was an industry. It became a name. So run with the acronym; that's just what's happened.
The current instantiation is generative AI, which is a particular thing, and which is what people are really focused on. When we're thinking about whether this is a good thing or a bad thing for humans or humanity broadly, we have to stop and recognize that what we're looking at today is one instantiation of how this core technology - which today is predominantly the transformer-based large language model for language, the diffusion model for images, whatever - how those core technologies are being offered up to us as individuals. But it doesn't have to be this way.
The way that it's being used, the way that it's being presented, what data it's using, the extractiveness - ah, that was what it was in your conversation; you said something about extraction, and that's going to get me going. It doesn't have to be that way, right? So we have a premise in what we do and how we think about the world, which is: how do we shift the industry back to making products for people, not from people? Not things that extract from people, but things that are made for them.
So when we first got into the computing industry, the guiding principle - at least the metaphor that was motivating to me - was Steve Jobs' idea that the computer was going to be a bicycle for our minds. It was a tool that was going to allow our minds to be faster and more efficient. We were going to be able to go farther and faster with less. And it was true. The computing capability of the personal computer did amazing things to allow us to do much more than we could on our own.
Somewhere in the early-to-mid 2000s, we shifted to much more of the taking-from. So we got a lot of free stuff off the internet - content, social media, et cetera - but it was extracting from us, by taking our attention and fracking it into little commoditized bits and selling off our intentions. So we were paying with our attention to get this free stuff from the tech industry, and that became a fundamentally extractive system.
What we're worried about now is whether that dynamic and that mindset of products will continue as we move from what we think of as the attention economy to the intimacy economy. These tools are developing an intimate understanding of us. We're telling them what we want. We're telling them what we hope for. We're trying to get them to help us figure out how to accomplish our dreams. We're putting really fundamental parts of our intimate selves into them. And we're telling them all about what's happening with our own health.
And all of that information is in these systems. Will they continue to extract from us? Is that what's going to happen - the systems extract from us, advertise to us, in order for us to get the free versions? That is happening already. Or are we going to shift back to a for--
HELEN EDWARDS: Thank you, OpenAI.
DAVE EDWARDS: OpenAI and Meta, both. But then we can shift it back to what we think of as minds for our minds. Because these tools are no longer bicycles. They're not things where we work the input system - click and move and pinch and zoom - and it does exactly what we want it to do. We give it some directions or ask a question, and then something happens that occurs to us. And so it is, at least metaphorically, some form of mind. So how do we design these minds so they still work for us and for our minds?
KIMBERLY NEVALA: Yeah. And it was interesting because in your article - and we'll link to it, so we don't need to go into the specifics of what you called the AI yard sale - the hypothesis is that there's a category error in how people are thinking about objectives and value for agentic AI. Which is essentially: we're going to use agentic AI to replace all of our software systems, and this and that.
And then as I was thinking about it, I realized it's like a little microcosm of the premise people seem to have in their minds when they're thinking about how we design AI systems to do all the things for all the people, all the time, right? And what the article said - I wrote it down so I didn't get it wrong - was that we need to switch the design premise and make sure that we're not designing AI systems that disappear, so that they're just out there doing all the things. As you said before, we have a lot of humans; why are we trying to do this? But make them more consciously collaborative.
So talk to me a little bit about what that looks like. And then, how might we think about that - whether we're leaders, educators, or others today - and put it into practice in the moment we're in now?
DAVE EDWARDS: Yeah. So just quickly, on the yard sale concept, this came up in an article that I was writing in response to the concept of the SaaSapocalypse, which is a really cool word, and I still need to figure out who coined it because it was-- good job.
So the idea here was that a whole bunch of things were coming out, particularly from Anthropic and Co-work, and a whole bunch of other things that were making it much easier and more accessible, and that showed the power of these systems to go off and do things, right? So we've had this concept of AI agents, which we've written about for a couple of years now, and the ability for them to go do things for you.
Investors took hold of this and thought, uh-oh, this means that nobody's going to need structured software anymore. You're just going to dial up your agent and say, hey, go tell me which customer I should call next. And so, therefore, I don't need Salesforce anymore, right? I don't need any of these SaaS systems. So the premise on Wall Street became that enterprises are going to have a yard sale, throw out all their SaaS systems, and use agents - and all the SaaS stocks collapsed. And it was an epic change. And I say that having spent years as an equity research analyst a long time ago, but it's a world I still kind of get.
So I basically said that it doesn't make any sense. First of all, because AI systems are only useful based on the context they have. One of the reasons these large language models are trained on such an immense amount of data is that it's this grand catch-all: hopefully there's some context in all the training data that the model can access to help you do the thing that you need to do. That's one way of doing it.
It's also really useful, though, if that model can actually access the information that you really want. Sometimes you're writing that all in a prompt. You're giving it a document, or it's given access to something in your company, to say, OK, well, here are all my customers; who should I call? The problem is, there's a lot more than just here's a list of customers and here's how much they paid us. There's all of this context about how the human system has operated. Did you call? Did somebody who sits next to you call them? What did they say? What was the reaction? What was moving around? What else was happening in the world?
All of this social fabric of humanity is what's so important to figuring out what to go do next. And the SaaS systems actually include a good amount of it - not all of it, but that's what they've been designed to do: to try and capture all the moves. Who got hired? Who didn't? Why? Who got promoted? Who didn't? Why? All of that is embedded somewhere in those systems. So maybe hold on to that.
I think the broader question here is how we think about these systems operating in our combined system. And you asked the question of whether we're going to have these AIs that disappear and go off and do things for you. And that's kind of cool. People like it. There's this premise that there's going to be a $1 billion company run by one person, because they have AIs that go off and do things.
Some of that is sort of intriguing, about the capability. Some of it is really scary - that you'd have some AI going off and doing things, and you have no idea where it is or what it's doing, whether it's actually being accurate or doing what you really want, or whether it understood you or not.
But a lot of it, to me, is just a bit harebrained - that you'd actually want something that goes off and does something you don't know about. How often do you delegate a task in your work environment to someone and say, don't tell me anything about it, just come back and tell me the answer? No - you say, I'd like you to come check in with me. I'd like to talk about it. I want to be involved in that decision process. I need to understand. I want you to make sure that you're telling the people you need to. We live in a complex system, and that is one of the most important parts of humanity.
So the way I think about it is that agents, and the ability for AI to do things, are really important. But I think we have to stop thinking about AI as the product itself, or an AI agent as the product itself, and start considering that we're putting AI and agents within a product that's more like an institution. Meaning it could be an educational institution, so humans and AI are both in this product together, learning. Some are teaching. Some are learning. Who knows which direction that's going? I don't know. There are various ways that you might do that. It could be an institution for governing an organization. It could be something else.
But it's thinking about the AI as a participant in the product rather than the product itself. And that's kind of a leap. It's definitely not something the tech industry thinks about a lot, in that technology companies really don't like collaboration. It's actually really hard, because you're trying to create something where you're not sure what people are going to do. People are messy and complicated - and that's why we love being human: messiness and weirdness are what make us interesting. But it's really hard to code for. It's hard to structure.
But that's our proposition: if we're going to bring these things in, we have to put them in a place where it can be messy, where you don't know what the outcome is. You have to be able to design for the unknown - which is, by the way, why the theme of our annual summit this year in October is unknowing. The idea of: how do you design for the unknowing? How do you design in a way where you don't know what's going to happen in the world broadly, but also what's going to happen in the next step of the conversation that you're having with AI?
KIMBERLY NEVALA: Awesome. Well, we could go on for a very long time because, as I've probably expressed during this recording, I've been so enraptured by all the things we could talk about that I've forgotten what I've actually said. One of the things that I love most, weirdly, about your work is that there are so many things that I just, at a gut level, so fundamentally agree with - and so many things where I go, no! And I love that.
DAVE EDWARDS: Isn't that great, though? Isn't that what's great about being humans, we can disagree?
KIMBERLY NEVALA: Yeah. It makes me so happy. Well, because I think that's the place where learning happens. But that being said, I am going to resist the urge to take us into our second hour here and turn it over to you for some final words. What would you like to leave with the audience as we continue to engage in this ever-evolving, interesting world?
HELEN EDWARDS: I think the thing that's top of mind for me right now is how anxious people are about this. And there's so many reasons. And the anxiety is quite well-founded because at the very least, the narrative from the big AI companies is pretty freaking scary, honestly. But I don't think it's right.
I think the world is much more complex. I think humans are more complex than this. I do think that we face a perilous moment right now of what could become quite a significant lock-in. So we are very motivated to have a much different conversation and to have more people having this conversation about the nuance, about the open space, about the possibilities, about the things that we need to work on to make this work for humans. The default position is actually not a very good one.
So people say we're building a movement. And I'm kind of like, oh, what? I've never really thought of myself that way. But our community is growing. There are more people who want to have these conversations than want to have conversations about UBI on the beach. So join us.
DAVE EDWARDS: I'll just say my one message is about how we are with each other. This is really difficult. This is the biggest transition, I think, that we'll go through in our lifetimes. And so there is no one right answer. But if we can remain kind with each other and respect each other's places, we're probably in a better place.
We do organize people around the concept of hope - which is a rebellious word, hope, in my world. Look up the definition by William Sloane Coffin, where hope rejects the status quo and is looking for something different. And when I say it doesn't have to be that way, that's coming from that position of hope.
And we invite people to join us to help figure out how we can do this differently. If you don't like the way it is, we're probably a pretty good group to spend some time with, because we're trying to change that. You can find us at artificialityinstitute.org. Click on Journal to read all of our stuff - the things that you've read, the articles, the books. Click on Community and fill in a little form, because that's the way you get in, to join us in a digital community that we host on a product called Circle, where you can communicate with us and others.
And we love joining up with people. So find us on LinkedIn. Find us on social. Helen's got a great series of videos coming out on Instagram and TikTok and YouTube and so forth. But we love to gather people to talk about this and to share ideas and have our ideas challenged.
KIMBERLY NEVALA: That's awesome. And I'm going to hope that everyone will join your mission to ensure that we, in fact, embrace and celebrate being human in the age of AI. So with that, I will thank you for your insights and time.
DAVE EDWARDS: Thank you.
HELEN EDWARDS: Thank you.
KIMBERLY NEVALA: And now to continue learning from thinkers, doers, and advocates such as Helen and Dave, you can find us wherever you listen to podcasts and also on YouTube.