Playing With Reality

Generative AI models like ChatGPT and Dall-E dominate the headlines these days, for good and, of course, for bad. They bring promise, yes, but also a kind of fear. So could they be a Frankenstein’s monster of a technology? Or something that could save the planet? And where did it all come from, anyway? In this, the first episode of a new season of Playing with Reality focusing on artificial intelligence, we take a look at the fascinating history of AI to discover how it went from a niche theoretical field to the world-defining technology it is today, exploring its possibilities and potential dangers.

Host Menno Van Doorn also introduces his new co-host Tijana Nikolic, a Sogeti AI expert, who will be joining him all season to comment on the interviews and discuss the fascinating themes.

Today’s Guests

Nell Watson
Eleanor ‘Nell’ Watson is a trailblazer in emerging technologies such as machine vision and A.I. ethics, and dedicates her work to protecting human rights and infusing ethics, safety, and values into technologies like Artificial Intelligence. She is the Chair & Vice-Chair of the IEEE’s ECPAIS Transparency Experts Focus Group, and P7001 Transparency of Autonomous Systems committee on A.I. Ethics & Safety, where she helps to safeguard algorithmic trust in AI. 
https://www.nellwatson.com/


David Weinberger
David Weinberger is an author, technologist, and speaker. Originally trained as a philosopher, David’s work focuses on how technology, particularly the internet and machine learning, is changing our ideas. Since 2004 he has been a Fellow at Harvard’s Berkman Klein Center for Internet & Society. His books include Too Big to Know and Everyday Chaos: Technology, Complexity, and How We’re Thriving in a New World of Possibility.
https://www.linkedin.com/in/davidweinberger-writer/ 


Sogeti works closely with clients and partners to take full advantage of the opportunities of technology. Find out more about us here: https://www.sogeti.com/


What is Playing With Reality?

We all know the online world today is not as it once was.

It is filled with avatars, deep fakes, people playing with their own realities and all kinds of crazy stuff.

From travelling back in time to exploring the depths of the Metaverse, Playing With Reality takes a look at technology as it happens and challenges its boundaries. Join our host Menno Van Doorn, Research Director at Sogeti, as he takes a deep dive into technology with experts from the industry. If you are fascinated by the possibilities of the Metaverse and emerging technology, this podcast is for you.

Welcome to Playing with Reality, a new podcast by Sogeti, Part of the Capgemini Group.

Speakers: Menno van Doorn, Tijana Nikolic, Nell Watson & David Weinberger
[Music Playing]
Menno: Unless you've been living under a rock, you'll have heard about the power of artificial intelligence by now. Generative models like ChatGPT and Dall-E, they dominate the headlines for good.
David: We talk about a Copernican Revolution; it could be on that scale.
Menno: And of course, for bad.
Nell: And there will be a moral panic about AI.
Menno: AI. This week I'm speaking to Nell Watson and David Weinberger, two technologists who have been thinking about these issues for decades.
So, welcome to the second season of Playing with Reality, with me, Menno van Doorn, a podcast from Sogeti, the home for technology talent.
So, here we are with a new season, and we have a surprise in store as well. I'm going to be joined each week by a very special guest to discuss it all and reflect on the interviews, Tijana or Tia Nikolic. Hi Tia.
Tijana: Hey, Menno. Thank you for having me again.
Menno: Well, you've been a guest last season, haven't you?
Tijana: Exactly, exactly.
Menno: So, you're now going to be my sidekick or whatever, the expert on AI. So, maybe you can do a little introduction of yourself.
Tijana: Of course, of course. So, my name is Tijana Nikolic, as you said. I'm an AI specialist in the Sogeti Netherlands Center of Excellence Team for data science. And I specialise in testing of AI and using AI to accelerate software testing.
So basically, anything regarding validation of AI, I'm very, very passionate about. And that's, I think, a light theme of a lot of our episodes. So, that's a bit about me.
Menno: In this first episode of our new season, we'll be delving into the fascinating history of artificial intelligence.
From its early conceptual origins to the groundbreaking research and development that has brought us to where we are today, AI has a rich and complex history that has been shaped by some of the greatest minds of our time.
We are going to explore the key moments, the breakthroughs, and the controversies that have defined the evolution of AI and discover how this revolutionary technology has transformed the way we live, work, and interact with the world around us.
Now Tia, did you notice anything interesting about my opening introduction just then?
Tijana: Menno, everything was interesting about the introduction.
Menno: Was it?
Tijana: But what do you mean exactly? I feel there's a surprise in store for me here.
Menno: I'm not sure whether it's a surprise, but it was actually written not by my producer or by me, myself, but it was written by ChatGPT. But could you notice that this was ChatGPT?
Tijana: Actually, no.
Menno: No.
Tijana: It was flowing very nicely. And it might be because of the speaker. But also, it might be because of the model. It sounds really like a human wrote it.
Menno: Yeah. So, we can get rid of the producer.
Tijana: Yeah.
Menno: Yeah. Sorry. So Tia, everyone is talking about ChatGPT, but let's take a step back and just, can you tell me what's actually AI?
Tijana: So, artificial intelligence or AI is very simply put just intelligence exhibited by machines or programs, for example. And basically, what AI studies as a computer field is intelligent agents.
So, some models or different programs that can take a look at an environment and derive a specific conclusion, or for example behave in a certain way that would be successful. That's basically what the idea around artificial intelligence is.
Menno: Now let's start with ChatGPT. Why is it so significant, this change in AI systems?
Tijana: Yeah, so the significant impact of ChatGPT, for me personally, I think it's also the accessibility of it.
Menno: Yeah.
Tijana: So, it's so easy to use, everyone can do it. You really do feel like you are speaking to a human. And I think that's why it's so significant, because it goes towards that general AI and it can do different tasks and it can converse in new and specific ways.
Menno: So, it's easy. And it's also the tone of voice, don't you think?
Tijana: Absolutely.
Menno: It's so natural. It's so human. It's so normal, I would say.
Tijana: Yeah.
Menno: So, does it signal a new era for AI? And does it signal a new era for humanity?
Tijana: Yes, it does, because now we're going to have a lot of conversations, and I definitely think they should be held around how do we use this in an ethical way? How do we use this to help people and not hinder society?
Menno: Great. So, now let's go into our first interview. First up, I wanted to discover more about where we stand with AI today. So, I got in touch with an old friend of mine, Nell Watson.
So, Nell is a futurist and a technology speaker who chairs the IEEE Transparency Experts Focus Group, where she works to engineer mechanisms into AI in order to help safeguard algorithmic trust. Let's hear from Nell.
[Music Playing]
Menno: First of all, what a pleasure being able to talk to you, and what a timing, I would say. The whole world is going berserk about everything that's happening in AI, and I have the opportunity to talk with you about that same thing. So, great to reconnect to you.
Nell: Thank you.
Menno: And being a trailblazer in AI in ethics, maybe the most important question is what are the big questions that we should ask now that this is all out in the open?
Nell: Well, it's clear that we're in a new era because the last 10 years or so has been the era of deep learning, which is about doing a small, tightly focused task very, very well, right?
Menno: Mm-hmm (affirmative).
Nell: So, making a prediction about something based on looking at a large amount of data, for example, but that wasn't generalizable. You couldn't use that classifier for motorbikes to work on trucks instead.
Now we're in that kind of era where we've moved from niche narrow applications towards systems which can think in an abstract manner to understand many different forms of data, sort of multi-modality, to understand text, audio, video, images, et cetera.
And not just text, but poetry and financial reports and sports results and everything together, to create a much more holistic understanding of the world. And moreover, these models enable us to distil knowledge.
Basically, the more data that we put in, the more of a distilled condensed version of that can be created within the model. And we can simply ask it questions or give it prompts, for example, suggestions.
And using those kind of like how a magician might use spells, we can elicit responses from it. Because these kinds of models are not just one intelligence. They are tens of thousands of different little intelligences, but they're all fused together. They are legion, they create collective consciousness.
And in fact, it's not too dissimilar from how our own brains work, at least on a sort of smaller level.
Menno: What are the consequences?
So, there's this new, you call it different little kinds of intelligence being working together. But for instance, what will it mean for culture? What will it mean for knowledge generation? What will it mean for my life and your life, for the people that are listening?
Nell: It's going to be powerfully disruptive in many ways. Firstly, nobody expected the world of the creative sector to be impacted by AI. There's such a thing as Moravec’s paradox, which says that things which human beings find easy, like walking across a room and opening up a can of soda pop, machines like robots, for example, embodied intelligences find very difficult.
Conversely, doing very complex mathematics, is something that machines can do trivially. And so, we thought that AI would help us make sense of numbers or make sense of complex phenomena, which it certainly does, but we didn't expect the application of that to have such an impact in, for example, creating images.
Now we're moving into creating videos, creating 3D objects and animations and scenes, basically being able to generate all kinds of content that the creative sector normally has been making for us.
So, that's going to have an impact on people's economic choices, people's economic opportunities. But beyond creativity, in terms of media, we have tremendous opportunity to improve the generation of new ideas.
And so, we're starting to see these examples where the kind of a dialectic between human intelligence and machine intelligence in the space between those two nodes, though something magical is being created. And it's enabling us to move past our assumptions about how a problem should be solved and to generate very creative and innovative solutions based on that.
And so, I think that machines are going to be very instrumental in how we approach science and how we approach various kinds of difficult or even seemingly intractable problems.
Menno: And is that, your point about ChatGPT, now you have something you can just talk with. So, this is the interface between man and machine is just a chat. Is that the magic? So, what do you make of ChatGPT?
Nell: I think the interface definitely is part of the magic, for sure. Again, to look at the analogy of that magical moment, in the early to mid-1990s, the internet has been around since the 1960s or so in various forms, but you had to manually connect to one server at a time and download things, and it was very complicated.
Then along came the worldwide web, which created this wonderful interface where you could just surf between content and servers, et cetera, in a very effective and easy way.
And that enabled so many new people to use these systems and to become more efficient. And that has become embedded within our daily lives, of course. Many of us can hardly imagine trying to do work without access to a computer or access to the web.
And similarly, this new interface, this new layer that we can use is going to change how we approach work, how we approach leisure, and perhaps even how we think about ourselves in the world today, and how we relate to society at large.
[Music Playing]
Menno: So, what Nell was saying about man and machine working together was really interesting. What I think is that they will outperform us before we know it, or they already are. Do you think standalone AI can be the future? Or do you think humans and machines will always be working together?
Tijana: I understand your point of view, Menno, and I just want to say here that for me personally, standalone AI will never be more powerful than human-centred AI, as I call it.
And this is because having people enabled by technology, having their day-to-day life enabled in a positive way by technology is always going to be more impactful and more adopted as well. AI is going to be more adopted if it helps humans, as opposed to, it being standalone and trying to outperform humans in a specific way.
Menno: I'm so happy you're so confident. I'm in the space of predicting the future, and I know you can't predict the future. So, I'm with you. I think these things, we should do it together, but I'm not convinced about whether this is a preferred future or that this will be the real future. But let's see.
So, about the creative sector, do you believe AI and things like ChatGPT will inevitably disrupt lots of the creative sector, for instance, or maybe put it differently, which will be the first to go?
Tijana: We see a lot of discourse with artificial intelligence or generative AI being used in the creative sector. And this is due to the models being trained on the data off the internet, as I previously mentioned.
So, that means that even inadvertently, or by mistake, these models can be trained on data that's copyrighted, for example. And this is definitely something that is currently disrupting the artists in visual art, especially because their style is fundamental property.
Menno: So, copyright comes to the rescue, sort of helping the creative industry-
Tijana: Yes.
Menno: Or the creative people against the machines, but also, I think the quality of the creations can be improved by using the tools, don't you think so?
Tijana: Absolutely. So, this is what's disrupting, especially visual artists, as I said. But if we take a look at writers, writers are not that mad about this because they use this to help them in their creative blocks, for example.
Menno: It’s funny, yeah.
Tijana: And this is a great point Menno, that you raised, also for visual artists, this can be a tool to help create art. And if we just take a look at generated art, I think with a naked human eye, we can already see like, “Oh yeah, this is AI generated.” You can actually see that the style is different, so-
Menno: Yeah, a lot of it is sketch, to be honest.
Tijana: Yeah, it actually can be. Yeah, exactly.
Menno: Okay. Maybe, one time we'll talk about the inventors, another part of the creative industry. But let's see how our other guest is thinking about these kind of things.
Tijana: So, who are you speaking to next, Menno?
Menno: Well, next I spoke to another renowned technologist, David Weinberger. And David, he is an author who has spoken widely about AI and machine learning, and he is also trained as a philosopher. That makes it interesting.
He's bringing a broad perspective on how technology, particularly the internet and machine learning is changing our world and how we think about our relationship with machines.
So, I wanted David to turn the clock back a bit to find out some more about AI's history.
[Music Playing]
So, hello David. Happy to have you.
David: Hello.
Menno: Yeah.
David: Happy to be here. Good to see you again.
Menno: So, all kinds of people have real questions about what's going on with AI. You just need to turn on the television and see teachers talking about kids using ChatGPT, all these kind of things.
What do you make of this enthusiasm that everybody now has about these capabilities? Do you think this is as big as the introduction of the internet? Or what do you make of it all?
David: I think it's very, very important. I think it's conceivably epochally important, in the sense that we talk about a Copernican Revolution. It could be on that scale. It's very hard to compare the impact of the internet and AI, only because they're so different in what they are and how they affect us.
The internet has been also, one might say, an ethical change. Internet changed just about every institution that it touched, which is just about every institution. And has direct effect on power relationships, and authority and every aspect of day-to-day life and how businesses work, and the difference, how they treat their customers, and the ability for customers to connect.
And so, all social dimensions, it's such a broad and also a really deep change. When you're changing what it means to be a friend from what it has been for thousands of years to most of my friends are people I've never met, at this point. It's a change in everything.
You compare that to machine learning, which is also starting to touch many, many, many aspects of our lives because this is such a powerful computing technology. But it also more directly challenges our ideas about who we are than our ideas about the institutions and social formations that we're in.
Menno: So, the whole enthusiasm today about what AI can do is quite fresh, I would say. So, when do you think was the starting point? Can you mention a turning point or help us to give maybe a historical perspective of how we got so far? So, where does it come from?
David: Yes. AI as a term has been around and loosely defined for a long time. And so, the history of it is really sort of squishy.
From my point of view, I would look to the mid-1990s.
So, in the mid-1990s, IBM did this crazy thing, which was to take Hansard, which is the Canadian bilingual record of everything said in parliament. So, you have this side-by-side translation going back dozens of years. And it's a lot of stuff.
And rather than trying to get a translation system to work by telling that system everything that it needs to know about the grammar and syntax and vocabulary of French and English, IBM simply gave a machine a copy of Hansards and said, note the positions of words, French and English, French and English.
And of course, it doesn't work that it's exact positioning, but you could see this is a sentence in French. This is the same sentence in English. Now look at a million examples and try to figure out how to translate one into another when the words are being used in some other context, which seems crazy, but actually worked better than anything that had been done.
The next big step is pretty much the same insight but applied to images. And the famous sort of thing you point to is a 2012 competition in which computer scientists were given the challenge of seeing how many images of objects their system could identify.
And people before this would try the same sort of thing as before IBM, you would do with language. AlexNet, the winning one, which won by a substantial jump in accuracy said, no, let's not tell it anything except we're just going to give all of the images in the database, they're all labelled with the object that's in them.
And we will just say, go ahead, you in a neural network, you figure out what are the salient differences in the position and colour of the pixels that let you distinguish an elephant from a rhino and the other stuff.
And the way that it does it with pixels is very much the same as it does with language or anything else or medical stuff. It looks for patterns in those pixels, and it keeps iterating and iterating and iterating and it finds patterns that may be so complex and in combination that we can't figure out exactly why that worked, but it did work, and we got this sort of quantum leap forward.
Menno: So, you were saying that 2012, for that reason, the thing that you can do with images and recognition is a turning point, or maybe the turning point in-
David: I'll say a turning point because everything's more complex.
Menno: A turning point. Okay.
[Music Playing]
So Tia, I don't know how old you are, but 2012, do you remember?
Tijana: I do remember it. It's actually the first year of college for me.
Menno: Oh, okay. So, it was your turning point, maybe.
Tijana: Yes.
Menno: But do you think it was a turning point in AI also?
Tijana: Definitely, yes. 2012 was the year the ImageNet competition was happening. And actually, one of our favourite professors that we all listen to on Coursera, Andrew Ng, and a Google engineer by the name of Jeff Dean, hatched this idea of building a large neural network with massive computing power using Google servers.
And they fed a massive amount of images to that model. And they actually used an unsupervised learning model, as I again mentioned in the beginning. And this model was trained on images from YouTube. So, just snapshots from videos.
And it actually took a look at all of those images itself and created clusters. One of them was a human cluster, a human body, something like that. And another one for cats.
And I say that this is the year where the cats became the queens of artificial intelligence, because we use cat images all of the time to do demos for our clients to show how computer vision works. So, that's how I like to call it.
Menno: And he mentioned the turning point. He went back to IBM. So, where's IBM now?
Tijana: IBM actually, again going back to Nell and myself and what we like to work on and what really is our passion, IBM is paving the way and leading the way of ethical AI.
So, what I really like to do is constantly check their website and see if they have any new packages, modules put out for the ethical AI. And they also actually talk about human-centred AI a lot.
So, they focus on different things. This is really where they are currently, in my opinion, doing a lot of great things and I really respect it. Yeah.
Menno: So, they are our guardians, maybe, the IBM.
Tijana: Yes.
Menno: Have you seen any significant other moments like 2012 recently?
Tijana: Yes, of course. There's a lot of these significant moments. One of them, of course, a couple of years back is actually GPT coming out. So, that was like 2019, where we saw these huge, large language models coming out.
And also, Dall-E, generative art happening in 2019. So, that was not so long ago, if we look at it. And you see how big of a boom these models made in the past couple of years, it's really impressive.
Menno: Maybe there are too many, there are so many. It's like in a candy store, you can pick so many new companies and things happening.
Next, I went back to Nell to find out some more about the current chasm crossing moments and how humans and machines are becoming ever more entwined.
[Music Playing]
So, where do we stand now in our understanding of the intelligence of artificial intelligence? I think a lot of people were taken by surprise. The broader audience is not so informed or specialist in AI, but you could call it a chasm crossing moment where-
Nell: Yes.
Menno: For definite. Can we expect more of them to come?
Nell: We are certainly going to see many, many more of similar kinds of models, and they will be applied to just about every problem you can imagine under the sun.
This is going to unlock incredible new waves of content, which is synthesised and mediated by machines, kind of a holodeck, if you will, but at least in a virtual sense of being able to design a scene or a sequence of scenes in even 3D spaces, which seem very realistic, which feel more like having a theatre with characters in it.
We're going to see non-player characters in games that we can converse with, not just on pre-recorded lines, but dynamically generated virtual lines. And this is going to create very rich new narrative kinds of content.
But of course, one of the biggest problems that we're starting to see emerge, and it will become a big deal in the years ahead, is the risk of supernormal stimuli as a result of these new creative capabilities.
A supernormal stimulus is something which is beyond natural, but it's something that we find irresistible. For example, there was a species of beetle out in the Australian outback which was mysteriously dying off, and they didn't know why that was happening.
And they found that it was due to pollution, but not chemicals, but rather these little stubby beer bottles that people would drink, and they'd throw in the bush. And to the beetle this shiny brown surface was basically the ultimate sexy beetle butt.
And so, they were humping these bottles preferentially to actual beetles, and that was having a catastrophic effect on the population of that species.
We have a similar problem today where we have so much content, it's so accessible, and yet now we're moving into a world where machines can generate this content endlessly.
Menno: Are you saying that the machines are creating the beer bottles? Is that what you're saying? And then that-
Nell: Yes, exactly. Because we are about to become so enamoured, so bewitched by AI, by our relationships with AI, romantic or platonic, which are in many ways going to be more fulfilling and more exciting than real human relationships that this sort of problem of hikikomori and people kind of retreating into their rooms is going to become much, much more problematic.
Menno: And that's your work. So, you're working on those principles in IEEE. You are a front runner, and I admire you for that. How do we get from principles and rules and good thinking, what you should and shouldn't do into action?
Nell: Well, we can boil it down into different elements which either drive or inhibit the emergence of a quality, quality in the sense of transparency, for example.
Menno: Okay.
Nell: So, publishing how a system functions in a way that people can easily understand, maybe like a comic book, will tend to improve transparency in a system.
Whereas concerns about intellectual property, that will tend to inhibit the quality of transparency.
And all of these things connect in a kind of a mesh. And when you laboriously map them out, you can understand the full picture and create rules and evidence which would satisfy that all of those respective elements are being taken care of.
And from that, we can then create these standards and certifications.
Menno: And by rules you mean maybe also laws about transparency. Do we need more laws about being transparent on these kind of things?
Nell: Definitely. And I think that, as I said, there's this sort of Sputnik moment occurring, which is only going to grow all the more, and there will be a moral panic about AI.
And typically, regulators have a big heavy hammer that they're not very accurate with whenever they are faced with a moral panic and an outcry from the public.
So, the more that the industry can adopt these kinds of rules at an early stage, they can de-risk themselves and hopefully avoid a regulatory hammer, which is unfortunate, which maybe leaves loopholes or targets the wrong populations, et cetera.
Menno: Well, that’s a hopeful end of our talk, but I do have one last question because I'm intrigued by your remark that having a crush on AI will be the ultimate … in 10 years you see me, and I have fallen in love with an AI system. How did that happen?
Nell: It could happen to any one of us. And it seems ridiculous now, but technology has a way of changing our values in ways that we don't understand at the time.
I think, for example, of hormonal birth control and the ramifications that has had on our society. On women entering the workforce, on feminism, on all of it, because of that technology.
Similarly, many of us couldn't imagine putting our real name on Facebook or jumping into a stranger's car through an Uber app. And now of course, this is something that most of us are pretty comfortable with.
So, I think that we're already starting to see systems such as Replica offering a kind of a relationship, which is potentially like a spouse or a boyfriend or girlfriend. This is the world today. In 10 years or less as things get more sophisticated, we're going to become incredibly enticed by these systems.
Menno: And I'm not embarrassed by it. So, it's socially accepted.
Nell: It will be.
Menno: It will be.
Nell: It will be. The same way that the LGBT kind of culture has become more ingrained in general society. We're starting to see the acceptance of polyamory and elements, such as that.
So, I think similar, we're going to see a lot of people who say, “I'm robo-sexual. I love my AI and I'm not ashamed of it.”
But whether that's ultimately to the benefit of humanity and human society is another question. And I fear it may not be.
[Music Playing]
Menno: Having a crush on AI. So, what do you make of these fears about love relationships between humans and artificial intelligence?
Tijana: Yeah, so it really reminds me of the movie Her, where the main protagonist falls in love with their AI virtual assistant. And in the end, she or they, they leave him, and he's left all alone. And yeah. But jokes aside.
Menno: Wow, I don't think it's a joke. I've been meeting a lot of security people lately, and they talk about people falling in love with a non-existing person or a person that exists, but you never meet them, but they want your money. So, I don't think it's as weird as we think it is. It's happening every day.
So, the moral panic idea. So, what do you think that will happen? Are you thinking about possibility that the general public will panic about these kind of things?
Tijana: Yeah. In the future, if it gets out of hand, and again, tying back to what we already discussed in the beginning, if we don't make sure that AI is adopted in a human-centred ethical way, this moral panic is something that's definitely going to happen, because you can see it even in current day, people are really fed up with their data being used by AI.
For example, whenever you go on Instagram, you get an ad recommendation. That's a machine learning model. These are big things that actually are shaping the way we act online and the way we interact with technology, so-
Menno: So, a lot of emotion, I think being fed up is not exactly the same as panic, but-
Tijana: No, no, no.
Menno: We have emotional response. And as you said, we should mitigate these kind of risks by looking at ethics, of course.
Tijana: Yes. Because I feel that, this again, hinders the adoption of AI, even if we have people being fed up and not wanting to share their data or acting in a certain way, or maybe because they don't understand what their data is being used for.
AI ethics can really help us with this, as you said, Menno. And to be really concrete, there's different guidelines by the European Union and of course, worldwide that are protecting users from unfair use of their data. GDPR, for example, that's the one that we all know about.
But there's also a guideline coming out, or actually it came out a couple of years ago, but this year it's going to be revisited again. It's about assessing the riskiness of our AI models in different industries, for example.
And based on that riskiness a specific way of auditing and different technical packages are going to be used to ensure transparency and to inform the users of how their data is being used, how they should use an AI model, and exactly what's happening under the hood. So, this is really something that's going to help preempt this.
Menno: So, at this moment, I hear the voice of Nell saying to me, “Menno, this is exactly the thing that I'm doing now with IEEE.” And by the way, Thijs Pepping, my colleague, or our colleague, has joined that initiative.
So, I think we will see from IEEE and many others, a lot of tools and guidelines to give answers to the direction that we want to take with AI. And I think that's a good thing.
So, to bring things to a close, let's go back to David to hear some final thoughts on the fears around AI and some of the other ways in which we might mitigate against it.
[Music Playing]
Menno: I would say, just for the sake of the discussion, that ChatGPT is a turning point for another reason, which is the massive use of it.
So, what I'm now seeing is the adoption curve going crazy. My father is 87 years old and he's playing with ChatGPT. He is doing a course on artificial intelligence. And it's about hope and fear of this future, because when we talk about this topic, you talk about winters and summers, but a lot of people are scared of the whole thing.
And it reminded me of a piece of research that we did, called the Frankenstein Factor, where we went back to a publication by Sigmund Freud from 1919, called Das Unheimliche …
David: The Uncanny.
Menno: Yes, The Uncanny, yeah. And he described our relationship with, let's call it, a humanoid thing. So, anthropomorphic stuff like mechanical dolls. And we didn't try to answer the question, should we be worried about it? But more the question, why do we fear it? So, why do you think people would fear AI?
David: I think there are lots of reasons to fear AI and we have a short cultural history of using AI as a goblin terminator, right?
Menno: Yeah.
David: The robots, if they become conscious, they won't need us and will kill us. That's not my main worry. I have to tell you.
So, the ChatGPT stuff, I'm going to call Chat AI, because it's already spreading beyond that particular instance.
Menno: I like this word.
David: … is a user interface to an insanely, insanely powerful machine learning system, a machine learning model. How could you not want to try it out? And once you try it out, how could you not want to test its limits? Because it is designed not only to sound like a human, but to engage with humans in ways that humans care about.
Because it is based upon language. And language is a representation, it's not even a representation. It's the means by which we engage with one another about the things that we care about.
And if you can build a model on that, you're not just building it on phonemes or word roots or anything like that; you're building it upon what we humans have engaged with one another about, which is a direct representation, a sign, of what we care about, what matters to us.
And this system, and other large language models as well, does that. So, let me be really quick about how this thing is built, as I understand it. You take all of these words, you dissolve the sources, which I'm going to come back to in a second because it's one of the things we need to fear.
You dissolve the sources. You just take these words in their literal proximity to other words. And you develop this model that has billions of words and something like a trillion relationships among them.
And so, this model takes words in. It doesn't actually keep the words; it doesn't have the word “home” in it. What it has instead is this: it replaces that word, wherever it sees it, with a meaningless token, some symbolic thing, because it doesn't want it to have anything to do with meanings. It could be pieces of confetti, it doesn't matter. It could be health records; it's just numbers.
But it turns out that with this massive number of examples and this massive computing, it will figure out what sorts of words, or tokens, not even words, humans will use next to other words in order to satisfy the prompt that the human typed in. That's all it does.
It has no meanings. It literally, literally, literally knows absolutely nothing except the statistical correlation of words.
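The mechanism David is describing, replacing words with opaque tokens and predicting the next one purely from statistical co-occurrence, can be sketched in miniature. This is only an illustrative toy (a bigram counter, not how ChatGPT actually works, and the corpus and function names are invented for the example), but it shows the core idea: the model never handles meanings, only numbers and their correlations.

```python
from collections import Counter, defaultdict

# Toy "language model": count which token follows which, then
# predict the most likely next token. Real large language models
# have billions of parameters, but the core idea is the same:
# statistical correlation between tokens, no meanings involved.

corpus = "we care about others and we care about ourselves".split()

# Replace each word with an opaque integer token, as described above:
# from here on, the model only ever sees the numbers, not the words.
vocab = {word: i for i, word in enumerate(dict.fromkeys(corpus))}
tokens = [vocab[w] for w in corpus]

# Count token-to-next-token transitions in the corpus.
follows = defaultdict(Counter)
for a, b in zip(tokens, tokens[1:]):
    follows[a][b] += 1

def predict_next(word):
    """Return the statistically most likely word to follow `word`."""
    token = vocab[word]
    next_token, _ = follows[token].most_common(1)[0]
    inverse = {i: w for w, i in vocab.items()}
    return inverse[next_token]

print(predict_next("care"))  # "about" follows "care" in every example
```

The model "knows" that "about" follows "care" only because the counts say so; scale the same principle up by many orders of magnitude and you get the statistical word prediction David describes.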
And one of the sobering things is that we think we're a very intelligent, original, and creative species. Well, it turns out it's a little sobering to say that no, a machine can do it if you give it enough examples. So, that's a little dispiriting.
Menno: Yeah, well, that's sobering. I like that, because we are looking in the mirror of our own, shall we say, stupidity.
David: Well, yeah. Well, it's sobering. This is all part of the Copernican Revolution. For me, the fundamental thing is we speak languages because we care about things, we care about ourselves, but we also care about others. We're connected to others.
Machine learning doesn't care about anything, in the case of language. It's just this giant array of connected words. It reflects what humans are interested in, but it doesn't itself care.
And for me, it can be dangerous to rely, for things that we care about, on a system that has no system of caring. At the same time, I think it's incredibly positive. And I think what this language breakthrough has done is game changing, or Copernican, or revolutionary.
At this point, it is just an extraordinarily talented bowl of soup. It's sobering to think that it can do so much. And it's tempting to think that we do it the same way. But that, I think, is one of the profound questions that I don't have an answer to. By the way, I don't have any settled thoughts about the extent to which we work the way that it does.
Menno: I think you summarised it pretty well. It all boils down to a bowl of soup. That's it. So, thank you so much, David. Thank you for your interesting perspectives.
[Music Playing]
David: Thank you so much. It's great to talk with you.
Menno: That's all for today. Thanks so much to you for listening and thanks to Nell, David, and of course Tia for helping me open up this fascinating topic.
Tijana: Thank you, Menno. If you enjoyed this episode and want to let us know, please do get in touch on LinkedIn, Twitter, or Instagram. You can find us at Sogeti. And don't forget to subscribe and review Playing with Reality on your favourite podcast app as it really helps others find our show.
Menno: In two weeks, we'll be focusing on how generative models of AI are changing the game in the space of healthcare and beyond. Do join us again next time on Playing with Reality.