Steering AI

How can AI help mitigate the climate crisis? What are the advantages of simple models such as Gaussian Processes? And what will AI mean for the future of work? In this episode, we talk with Professor Marc Deisenroth, DeepMind Chair of Machine Learning and AI at UCL, and Deputy Director of the UCL Centre for Artificial Intelligence, where he leads the Statistical Machine Learning Group. His research interests centre around data-efficient machine learning, probabilistic modelling and autonomous decision-making with applications in climate/weather science and robotics.

 Date of episode recording: 2023-10-10T00:00:00Z
 Duration: 01:01:21
 Language of episode: English
 Presenter: Reuben Adams
 Guests: Professor Marc Deisenroth
 Producer: Reuben Adams
 

What is Steering AI?

Artificial Intelligence is hurtling us into an unknown future. Will it pollute our infosphere, reinforce biases, or even be an existential risk? Or will AI help us solve the energy crisis, revolutionise healthcare and even eliminate the need for work? Perhaps all of these? On Steering AI, we talk to leading academic experts at the cutting-edge of this increasingly powerful and pervasive technology, hearing their views on the benefits and how to steer around the risks. The first step to mitigate a threat is to understand it. Technological progress is hard to predict, but no one doubts AI will have an enormous impact on our lives. Brainstorming with the experts, we seek to understand the problems we’re already facing, and what might be coming our way.

Transcript
Marc 0:00
And I'm not sure we have even entered the discussion of how human-level AI can be beneficial to society. It's more about: can we actually build it? And at the moment, if we're only talking about whether we can build it, I don't see any benefit for society. The question is, once it's built, what can be done?

Reuben 0:29
Hello, and welcome to Steering AI, where we talk about how AI will shape the future and how we can steer it along the way. I'm your host, Reuben Adams, PhD student in AI at UCL.

Today I'm honoured to be talking to the incredibly accomplished Professor Marc Deisenroth, DeepMind Chair of Machine Learning and AI at UCL and Deputy Director of our AI centre, where he leads the Statistical Machine Learning Group.

His research interests centre around data-efficient machine learning, probabilistic modelling and autonomous decision-making, with applications in climate and weather science and robotics. Professor Marc Deisenroth, welcome to Steering AI.

Marc 1:06
Thank you for having me.

Reuben 1:07
So Marc, what have you been working on recently?

Marc 1:10
So actually, we had a paper deadline yesterday evening. And the paper is about putting a probabilistic model, which is called a Gaussian process, on some sort of extended version of a graph called a cellular complex. That allows us to model more sophisticated graph structures using probabilistic models.
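
To give a flavour of what a Gaussian process on a graph looks like before it is generalised to cellular complexes, here is a minimal sketch of the graph-only case: a Matérn-style GP prior over the nodes of a graph, constructed from the graph Laplacian. The kernel form follows the standard graph Matérn construction; the toy graph, function names and parameter values are illustrative assumptions, not taken from the paper discussed here.

```python
# A minimal sketch (illustrative, not the paper's method): a Matern-style
# GP covariance over graph nodes, built from the graph Laplacian.
import numpy as np

def graph_matern_kernel(adjacency, kappa=2.0, nu=1.5):
    """K = U diag((2*nu/kappa^2 + eigenvalues)^(-nu)) U^T, a valid
    positive-definite covariance between function values on the nodes."""
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency
    eigvals, eigvecs = np.linalg.eigh(laplacian)
    filtered = (2.0 * nu / kappa**2 + eigvals) ** (-nu)
    return eigvecs @ np.diag(filtered) @ eigvecs.T

# Toy graph: a cycle on four nodes.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
K = graph_matern_kernel(A)
print(K.shape)  # (4, 4): covariance between the four node values
```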

Reuben 1:31
And what was the motivation behind this work?

Marc 1:34
This was more of a curiosity-driven approach of how we could generalise Gaussian process models that live on graphs to structures that are a bit more general than graphs.

Reuben 1:45
So this sounds like quite fundamental work. What kind of applications do you imagine for it?

Marc 1:49
Yeah, that's correct. This piece is foundational, but the motivation behind a lot of our work is applications around weather, but also in robotics, and this particular work could be applied to modelling spatio-temporal phenomena, for example, ocean currents or wind speeds on a global scale.

Reuben 2:17
So my impression is that you've become more interested in climate science and weather prediction lately. What's driven that development of interest?

Marc 2:25
So that is right. I started off, I think maybe during my master's already, working on control theory, moved then from control theory to reinforcement learning to robotics. And more recently, we switched gears to focus more on weather modelling, or environmental modelling.

And the reasoning behind that is that I wanted to broaden the scope of the research that we do a little bit. For a while we had two strands of research. One was robotics as an application, and the other one was going into spatio-temporal modelling, for weather, for example.

But with limited resources, we needed to focus on one, or we chose to focus on one direction, and we chose spatio-temporal modelling because that aligns a little bit better with our expertise in foundational research.

Reuben 3:27
So were there broader motivations here around contributing to climate science and how AI can perhaps help mitigate the climate crisis?

Marc 3:36
Yes, definitely. There's the motivation, obviously, that to some degree we are a bit stuck. I don't want to offend people who spent decades on developing models to describe weather or ocean currents.

But there is definitely room for data driven models to bring some change or some significant change in the way we model these kinds of phenomena.

And hopefully, that will also lead to some some breakthrough when it comes to climate modelling, climate mitigation, resource allocation, and so on and so forth, and maybe even like policy development, but that's really a bit further down the road.

Reuben 4:21
Do you think better models is what we're really bottlenecked on?

Marc 4:26
Yes and no. So there are, for example, existing models for weather prediction that work really well in the medium term, for forecasts of up to maybe a few days or so; that works relatively well.

But when it comes to extremely short-term forecasting, that's called nowcasting, for maybe just an hour or two, these models are not overly good. Or when it comes to long-term prediction, like climate or seasonal predictions, then I think there's also some room for improvement. But also when it comes to modelling other things: for example, we've worked with the UK Atomic Energy Authority on modelling plasma flow in nuclear fusion reactors.

There, we face challenges that just cannot be described by the physics knowledge that exists at the moment. When it comes to plasma turbulence, when plasma hits the wall of the fusion reactor, there are some limits to what we can model with the physics equations that we can write down. And in that sense, some data-driven methods can complement the existing methods.

So I'm not necessarily advocating for replacing existing models, but actually working with people who have spent a lot of time and effort and understand what they're doing, to make existing models better, or to complement the existing methods in places where they just don't work that well.

Reuben 5:56
So sort of augmenting existing methods rather than replacing?

Marc 6:00
Yes, I think that's a fair description.

Reuben 6:04
So is there a tension between the physics community and the machine learning community, in terms of having a deep understanding of how the plasma evolves and moves around versus simply predicting how it moves around?

Marc 6:14
I think it depends on who you're talking to. Some people are a bit more open to looking at machine learning models, others not so much. But I think we as machine learners also have to be careful that we're not invading other areas, so to say, or just saying: use a deep neural network and it will solve all your problems.

I mean, that would be in a way really silly. We have to make this a joint effort and be respectful of the knowledge that comes from the other side.

Reuben 6:49
Having a better model of the climate or the weather, or being able to make better predictions: how concretely does that help with climate change mitigation?

Marc 6:58
Well, there's a difference between weather and climate. But let's say we want better models for predicting extreme events, which could be hurricanes or floods; then we could maybe send out warnings to evacuate.

I think a year or two ago there was a big flood in Germany that cost a lot of people's lives, and also a lot of money because of the destruction. We've had regular flooding here in the UK as well.

And if we have better forecasts, or more reliable forecasts, we can work with them and issue warnings. So that would be just one way where better modelling could make a tangible difference.

Reuben 7:46
Yeah, for sure. You've been heavily involved at UCL with increasing diversity in the computer science department. How's that been going?

Marc 7:54
We're making progress. I took on the role of chair of the Equity, Diversity and Inclusion committee maybe two and a half years ago; that role didn't exist before.

Our focus for the first one or two years was race equality, but we also worked on disability issues, neurodiversity, LGBTQ+, and so on. I think especially in the race equality space we made some tangible progress.

We're building, or have built, a community within the department. We identified issues amongst staff and students where people felt excluded, or identified issues where there's room for improvement.

We created a specific scholarship for black undergraduate students, to support them through an undergraduate degree at UCL. There were a couple of weird things that we discovered; for example, there were scholarships that only fund the first year of your undergraduate degree.

And then people wonder why students drop out after one year. It just doesn't make sense. So that was consolidated, in a way, so that now we only offer full scholarships.

And we also got additional money in to create these new scholarships, which are called Aspire scholarships. We just had a meeting last week with the second intake, the second cohort, and it was really great.

So it's three scholarships per year for an undergraduate degree in CS at UCL. That's just one aspect where I think we've already made quite a big difference.

But in other aspects we're also making progress, whether that is in the LGBTQ+ space, where we're setting up a community as well, with an allies network of people who want to get involved in creating a community for staff and students. We're also looking at issues with disability access.

You may find this shocking, but currently at UCL CS, although we are in 11 buildings, we don't have a single building that is fully accessible. That will change in the next couple of months, when we get one floor in each of two buildings that will be fully accessible. But it's a sad state.

Reuben 10:29
Would you say there's progress being made there at least?

Marc 10:32
Progress is being made in the sense that there is raised awareness and a big push, definitely from the CS side, but we are not in control of the building. So that effectively goes back to UCL Central.

Reuben 10:46
So one of your main interests is reinforcement learning, in particular data-efficient reinforcement learning. Why is reinforcement learning so data-hungry?

Marc 10:56
If you look at most reinforcement learning algorithms, they are model-free. That means they don't use a forward model to do rollouts. And learning a forward model is very difficult.

Hypothetically, if you had an exact model that rolls out, that simulates the environment given a sequence of actions, then we are basically doing optimal control; we are solving an optimisation problem. And that's, you know, that's great.

Reuben 11:24
You almost wouldn't need any data at that point.

Marc 11:26
That's correct. Once you have an exact model, you don't need any data. But reinforcement learning doesn't make the assumption that you have that model; optimal control does make that assumption.

So the question is, what are you going to do if you don't have a model? One way is to just apply actions and see what happens, and basically save a big table of state-action pairs that give rise to rewards, and maybe the successor states.

That's what model-free methods do. Or you can try to learn a model from state, action and next-state tuples, and then use that model as an emulator of your environment.

Now, you could say that building the model is a great idea, because then you don't need the data anymore if the model is exact. But the problem is, the model is not exact.

The question then is, what are you going to do about this? So one key thing that I believe needs to happen is to be aware of the inaccuracy of the model, and equip the model with uncertainty.

So that could be uncertainty on the parameters of the model, and then use those uncertainties during the planning stage or the rollout stage. And when you do that, there is some hope that you can actually get something done with less data.

So that's one way to do more data-efficient learning, but you have to be very mindful about the quality of your model. There are also other ways to speed up learning.

There could be transfer learning: let's say you already have a policy learned for one particular task, and the question is whether you can transfer that knowledge to a new task that is somewhat related. That will also, ideally, speed up learning compared to learning from scratch. And those ideas are, let's say, independent of whether it's a model-based or model-free method.

There's also a lot of work at the moment on offline reinforcement learning, where you have some huge data set collected from previous tasks, which may not necessarily be related to the task you're interested in at the moment.

But you can use that offline data set to pre-train a policy, for example, and then fine-tune using online data. Again, that will hopefully speed up learning compared to learning from scratch.
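
As a concrete illustration of the model-based idea Marc describes, here is a minimal sketch in a toy one-dimensional setting: a forward model with uncertainty, here a small ensemble standing in for, say, a GP posterior, whose predictions are averaged over sampled rollouts so the planner accounts for model error. The dynamics, policy class and all names are illustrative assumptions.

```python
# Minimal sketch: plan with an uncertain learned model by averaging returns
# over sampled rollouts, penalising the planner where the model is unsure.
import numpy as np

rng = np.random.default_rng(0)

# "Learned" ensemble of linear forward models x' = a*x + b*u; each member is
# perturbed, standing in for a GP posterior or bootstrapped neural networks.
ensemble = [(1.0 + 0.05 * rng.normal(), 0.1 + 0.02 * rng.normal())
            for _ in range(5)]

def expected_return(policy_gain, horizon=20, n_samples=30):
    """Average return over rollouts sampled from the model ensemble."""
    returns = []
    for _ in range(n_samples):
        a, b = ensemble[rng.integers(len(ensemble))]  # sample a model
        state, total = 1.0, 0.0
        for _ in range(horizon):
            action = -policy_gain * state   # simple linear feedback policy
            state = a * state + b * action  # rollout under the sampled model
            total += -state**2              # reward: drive the state to zero
        returns.append(total)
    return np.mean(returns)

# Crude policy search over the single policy parameter, using no new data.
gains = np.linspace(0.0, 10.0, 51)
best = max(gains, key=expected_return)
print(f"best gain under the uncertain model: {best:.1f}")
```

Methods like PILCO, discussed later in the episode, push this idea further by propagating the model uncertainty analytically through the rollout rather than by sampling.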

Reuben 14:12
So this is similar to the move we've seen in large language models, where the field has gone from modelling specific tasks with individual models to having enormous pre-trained foundation models that can then be fine-tuned on specific tasks. It's obviously very difficult in reinforcement learning for an agent to learn a model of its environment for each specific task separately.

Do you see a similar kind of move happening in reinforcement learning, where we have enormous foundation models trained on enormous numbers of environments, and then we fine-tune those on specific environments and specific tasks?

Marc 14:46
Yes, I think that is also happening in reinforcement learning. Basically, a large language model is a sequence model, and reinforcement learning is by nature a sequential task. So there is a very, very close relationship.

Reuben 15:01
So with generative models, we're getting better now at synthesising training data for reinforcement learning agents. Does this eliminate the need for data-efficient reinforcement learning?

Marc 15:15
As a way of generating more data, that is correct. And in a way, if that data is sufficient, you may not want or need to learn models. As I said earlier, learning models is not entirely trivial.

So it seems like there's an easy way out by, in quotation marks, "just" generating more data using generative models. But you also need to train those models and make sure the data you generate is meaningful.

Reuben 15:50
So you're known in the department as a big fan of Gaussian processes. What do you like so much about GPs?

Marc 15:58
One of the nice features of a Gaussian process (I think you mentioned interpretability of models at some point earlier) is that it has only a few hyperparameters.

And in many cases you can actually interpret what these parameters do. For example, there's a lengthscale parameter that tells you about, I don't want to say how smooth the function is, but about long-term correlation of function values. Or there's an amplitude parameter that tells you about the amplitude of the function. So you can really explain something about the properties of the function that you want to model.

There can be a periodicity parameter, depending on the kernel. So you can interpret a Gaussian process. Additionally, a Gaussian process gives you uncertainty: it's a probabilistic model with built-in model uncertainty, which is really nice.

And given how powerful that is, inference in Gaussian processes can be done in closed form, at least in a simple setting where everything is conjugate. So I think these are really nice properties of Gaussian processes.
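
To make those two properties concrete, interpretable hyperparameters and closed-form inference in the conjugate setting, here is a minimal GP regression sketch. The kernel choice, data and hyperparameter values are illustrative assumptions.

```python
# Minimal sketch of conjugate GP regression: the posterior is available in
# closed form, and both kernel hyperparameters have a direct interpretation.
import numpy as np

def rbf_kernel(x1, x2, lengthscale=0.5, amplitude=1.0):
    # lengthscale: how quickly correlation between function values decays;
    # amplitude: the typical magnitude of the function being modelled.
    sq_dists = (x1[:, None] - x2[None, :]) ** 2
    return amplitude**2 * np.exp(-0.5 * sq_dists / lengthscale**2)

X = np.array([-1.0, 0.0, 1.0])   # training inputs
y = np.sin(X)                    # training targets
Xs = np.linspace(-2.0, 2.0, 5)   # test inputs
noise = 1e-2                     # observation noise variance

K = rbf_kernel(X, X) + noise * np.eye(len(X))
Ks = rbf_kernel(Xs, X)
mean = Ks @ np.linalg.solve(K, y)                         # posterior mean
cov = rbf_kernel(Xs, Xs) - Ks @ np.linalg.solve(K, Ks.T)  # posterior covariance

print(mean)                   # predictions
print(np.sqrt(np.diag(cov)))  # built-in model uncertainty
```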

Reuben 17:11
And is this uncertainty calibrated?

Marc 17:13
So that's a good question. So empirically, yes. If the model assumptions are correct, the uncertainty is calibrated.

Reuben 17:24
Right, okay, so any error in the calibration is from model uncertainty.

Marc 17:29
It could also be the wrong prior. For example, if I choose a prior that says my function is differentiable, but my function is not differentiable, then the uncertainty is not necessarily calibrated.

It also becomes a bit more complicated in non-conjugate models. For example, if we do classification, then things become a bit more complicated, because we need to start doing approximate inference.

Or if you move to sparse models, then again we have an approximation to the posterior distribution. So as soon as approximations come in, or the prior assumptions are not exactly right, talking about calibrated uncertainty is a bit more difficult.

But then there are ways you could calibrate uncertainty post hoc, using, for example, conformal prediction or other methods.
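
As a sketch of the post-hoc route mentioned here, this is split conformal prediction wrapped around an arbitrary point predictor: hold out a calibration set, score the residuals, and widen intervals to an empirical quantile. The stand-in predictor, data and coverage level are illustrative assumptions.

```python
# Minimal sketch of split conformal prediction: hold out a calibration set,
# score the model's residuals, and widen intervals to an empirical quantile.
import numpy as np

rng = np.random.default_rng(1)

def model_predict(x):
    # Stand-in for any fitted model (GP mean, neural network, ...).
    return np.sin(x)

# Calibration data the model was NOT trained on.
x_cal = rng.uniform(-3, 3, 200)
y_cal = np.sin(x_cal) + 0.2 * rng.normal(size=200)

# Nonconformity scores: absolute residuals on the calibration set.
scores = np.abs(y_cal - model_predict(x_cal))
alpha = 0.1  # target 90% coverage
level = np.ceil((1 - alpha) * (len(scores) + 1)) / len(scores)
q = np.quantile(scores, level)

# Interval with a finite-sample coverage guarantee under exchangeability.
x_new = 1.3
lo, hi = model_predict(x_new) - q, model_predict(x_new) + q
print(f"90% interval at x={x_new}: [{lo:.2f}, {hi:.2f}]")
```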

Reuben 18:24
So is it true that when you're writing papers on certain kernel methods, or Gaussian processes or whatever, you have to sort of excuse yourself for not doing deep learning on the specific task you're doing?

Marc 18:37
No, we don't need excuses for this. Deep learning has clear flaws. There's always an argument to use Gaussian processes, because in deep learning it's extremely hard to get meaningful uncertainties out, and even if you manage to, it's super expensive.

Obviously, you can't solve the kinds of large-scale problems with Gaussian processes that you would use deep learning methods for, but there are other problems that you can solve with Gaussian processes where I would definitely not use a deep network. So I think the kinds of problems you want to solve are potentially different.

Obviously, there are overlaps. But I don't need an excuse to use a Gaussian process. A Gaussian process has its strengths; it also has its weaknesses. And when you write a paper, normally I would say, okay, deep learning has had some success in this area.

But predicting uncertainties is tricky and expensive, whereas with a Gaussian process you don't have to worry about various other things. There's obviously a computational issue with Gaussian processes.

They don't scale particularly well, but there are some pretty good sparse approximations of Gaussian processes. I would say the problem with large data sets, at least in the number of data points, is more or less not an issue anymore.

We have seen Gaussian process models running on billions of data points, training set sizes of billions. What is still challenging is high-dimensional inputs. Images, for example, are still a big problem.

If you go away from classification or representation learning and look at regression problems, a Gaussian process, I believe, should be the first choice, at least when it comes to relatively low- or medium-dimensional problems.

If you want to model functions that go from 5D to the real numbers, then a Gaussian process could be a very good and very powerful choice here.
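
On the scaling point, here is a minimal sketch of the flavour of sparse approximation involved: summarise n data points through m inducing points, so the expensive solve happens in m dimensions rather than n. It is illustrative only, not a production method such as stochastic variational GPs.

```python
# Minimal sketch of an inducing-point (Nystrom-style) approximation: the
# usual O(n^3) GP solve becomes roughly O(n m^2) with m << n inducing points.
import numpy as np

def rbf(a, b, lengthscale=1.0):
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / lengthscale**2)

rng = np.random.default_rng(2)
n, m, noise = 5000, 30, 0.1
X = rng.uniform(-5, 5, n)
y = np.sin(X) + noise * rng.normal(size=n)
Z = np.linspace(-5, 5, m)            # inducing inputs summarising the data

Kmm = rbf(Z, Z) + 1e-6 * np.eye(m)   # m x m: cheap
Kmn = rbf(Z, X)                      # m x n: the only pass over the data
# Solve in the m-dimensional inducing space instead of the n-dimensional one.
A = Kmm + Kmn @ Kmn.T / noise**2
weights = np.linalg.solve(A, Kmn @ y) / noise**2

x_test = np.array([0.5])
print(rbf(x_test, Z) @ weights)  # approximate posterior mean, near sin(0.5)
```

Modern variational versions of this idea are what allow GPs to scale to the very large training set sizes mentioned above.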

Reuben 20:56
So you published this algorithm called PILCO in 2011, which is for data-efficient reinforcement learning. I've heard that your MATLAB implementation of this is actually still a strong benchmark, even this many years later, and that it's in fact faster than many Python implementations. Is that true?

Marc 21:15
Oh, that may be true, yes. To be fair, I haven't touched this in a while. But maybe every couple of years I just run it for fun on the laptop again, and it's like: oh wow, the hardware got faster.

Reuben 21:39
So you have several positions. You're the DeepMind Chair, you're also the Deputy Director of the AI centre, and you've got a whole host of best paper awards and a promising young academic award from UCL. How are you making time for all of this?

Marc 21:54
I mean, most of this is because I have amazing students. It's not my work alone, right? This is effort that comes from a great team. On the academic side, the paper awards, it's all down to amazing students and collaborators.

Reuben 22:18
So let's move now towards recent progress in AI. Has any of this caught you by surprise?

Marc 22:23
There were a couple of developments recently that did catch me by surprise. One is ChatGPT. That was surprising, especially the amount of progress that was made and the impact of it.

A different application was protein folding. That was also a great surprise. I think both of them caught me by surprise because I'm not following those directions too much; I don't work in protein folding or with language models to begin with. So these two developments were super exciting and surprising, in a good way.

Reuben 22:23
And was it GPT-3 that was surprising, or ChatGPT?

Marc 22:40
I think ChatGPT was the one that made it to the mass market in the end, right? You know, something that survives contact with reality for more than 24 hours is surprising. And it's still up and running.

Reuben 23:31
So what was so surprising about ChatGPT?

Marc 23:35
The fact that it could generate meaningful boilerplate text was very, very interesting. And it's being used by so many people now. Obviously, we hear about students using it for essays, and that's fine. But I have a student with a startup who told me that he uses ChatGPT to draft pitches for VCs; people use it for grant applications. So for boilerplate text it's great.

Reuben 24:11
So you call it boilerplate text. It can surely do a lot more than that, right? I mean, it can solve maths puzzles; it's done quite well on various academic benchmarks and coding tests. Did you use that word on purpose, boilerplate?

Marc 24:26
I mean, I was focusing more on text generation. I was not talking so much about coding or solving maths puzzles. But yeah, using it as a tool to draft some code? Sounds like a great idea to speed up the development process.

Reuben 24:50
Do you imagine future language models being good enough that they're not just drafting, but correcting the draft, improving and iterating to the point where the human involvement shrinks to a much more minimal level?

Marc 25:05
Yeah, if we are not there yet, we are definitely not far off, right? You upload a text and say: simplify this text, shorten the text. And immediately you have the answer. I mean, that already exists.

Reuben 25:19
Yeah. So once you've got this more deliberative thinking, with a back and forth between the model and its own output, it seems like a lot more jobs are up for grabs that could be automated. Do you find this concerning at all? Or do you think there'll just be new jobs, as with every previous new technology?

Marc 25:41
There's always a discussion about job replacement. Some jobs will disappear over the next decade or so; I don't think it's going to be imminent. But new jobs will also be created. And the tools that are being developed these days can also enhance existing jobs or modify existing jobs.

For example, I recently attended a fashion-meets-AI workshop here in London, where designers of clothing used generative models, or generative AI, to draft dresses, which then got modified by the designers to get them up to scratch. But as a draft, and I don't want to say boilerplate again, but as a draft, this was great.

And the fashion they displayed was amazing. That's coming from somebody who has no sense for fashion, but I thought it was really amazing. Again, I believe there will be the creation of new jobs that don't exist yet, in order to support these kinds of systems, which didn't exist before.

Reuben 26:55
Do you have examples in mind?

Marc 26:58
I mean, there's always a question of maintenance. My family comes from a rural area in Germany, and a few years ago I visited a farm there that had a fully automated cow stable, with one employee and 120 cows, for milking, cleaning and feeding.

I was actually super fascinated by this, because there was computer vision involved and data analysis involved, to check automatically, just by looking at the milk, whether the cow is healthy or not. But there's only one person left to manage 120 cows.

But then you also have to maintain these kinds of robots. So that creates some new jobs; you potentially have data analysts that look at the milk and, you know, the data from the milk.

So you create new jobs which didn't exist before. I believe something similar is happening with the language models or generative models that are being developed at the moment.

Reuben 28:11
So these maintenance or data analysis jobs sound like quite high-skilled jobs. Is it possible that the new jobs that come into existence are all high-skilled jobs that take a lot of training and a lot of education to do?

Marc 28:25
I'm not sure whether they are high-skilled; higher-skilled, maybe. So there will be some training required. But I also believe that we'll develop tools that make it easier to do those jobs.

Reuben 28:41
Easy enough for an AI to do it.

Marc 28:45
I mean, sure. Now the question is going in the direction of: do we need jobs? Are we going to have jobs at all in the future, if there is an automated system that could just do all the jobs? That wouldn't be too bad, I guess?

Reuben 29:01
Is this sort of your positive vision for an AI future?

Marc 29:05
That's a bit extreme. I believe there would be a lot of downsides to this as well. It could entirely be that if one doesn't have a daily routine, and a job is in a way a daily routine, one could end up with all kinds of mental health problems or physical problems. I think it's quite complicated, or difficult for me, to speculate about this.

Reuben 29:33
So you mentioned that you see possible downsides as well in the future. What do you have in mind there?

Marc 29:42
I mean, AI or machine learning methods can cause some problems. I don't think we have to look into the future; we can look at where we are right now. AI or machine learning technologies are being used to marginalise already marginalised groups further, to extend the power gap between groups of people. So that is definitely something that is already happening. There is the danger of misinformation. I mean, I don't have a Twitter account anymore, or X, as it's called now.

Reuben 30:26
That's probably wise.

Marc 30:28
I left about a year ago; I think that was good for my sanity. But what I've heard is that there's a lot of misinformation going around at the moment about political events, and that is, I think, largely driven by some form of automated trolls.

We don't have to go into the future; we can already look at what's happening right now. And in the future there's much more potential for danger, because techniques are being developed further and further.

Reuben 31:02
This seems to be a sort of cat-and-mouse game between production and detection. Who do you think ultimately wins that race?

Marc 31:09
Well, hopefully it's going to be a balance. It's almost like virus and antivirus; maybe it turns out to be something like that, where a balance will be found, right?

Reuben 31:23
I mean, at some point, once you've sufficiently well learned the manifold of all possible images, you could just draw from that distribution. Surely at that point there's just no way to tell whether an image, just a sequence of pixels, is genuine or not.

Marc 31:40
Possibly, yeah. We may end up in that situation.

Reuben 31:48
So you mentioned that AI could exacerbate power imbalances. Could you give an example?

Marc 31:53
Many years ago, and maybe this is not totally up to date, vision systems had issues with darker skin tones or with identifying women. There were also systems that were inherently racist when it came to court rulings; there have been analyses showing that these systems were working against people of colour.

These are just two or three examples where automated systems were not working for the general public, but only working for a subgroup, or preferred one group of people over another.

Reuben 32:41
There's an example with image generation models as well, where higher-paid roles are depicted with lighter skin tones and lower-paid roles with darker skin tones.

The problem clearly starts with the data. Do you think it's possible, with fine-tuning or weighting methods or whatever, to get rid of this bias in models? Or is it always going to be there in some form until we tackle the root cause in the dataset?

Marc 33:07
It's not only the data. The data is one aspect, but it's also the people building the models, right? If the people building the models are all male, white, 20- to 30-year-olds, then there's also a problem, because you don't get different opinions or different considerations into the model building.

So it's not just the data. I would pretty much go back to the pipeline and say the entire pipeline is prone to flaws.

Reuben 33:46
And this is why it's important to have a diverse computer science community.

Marc 33:50
That's definitely one aspect of why a diverse computer science community is important. But it's also not the only argument for having a diverse community in CS.

Reuben 34:07
Do you think AI is currently overhyped or underhyped?

Marc 34:12
In a way, maybe both?

Reuben 34:15
Okay?

Marc 34:15
I'm not sure. I had a conversation yesterday with somebody about the AI hype, and in the end we decided hype is actually not really what matters. What matters is what is being delivered. Hype doesn't help; think about what you can deliver.

Reuben 34:40
Okay, let's put it a different way. Are people's impressions of what AI can deliver out of step with what it actually can deliver?

Marc 34:47
That depends on whom you're talking to.

Reuben 34:51
Okay, let's start with the general public.

Marc 34:54
I don't know the general public. But let's say people, the media, politicians, the general public, if you want to call it like this: many people don't really understand what is happening at the moment.

And I'm also not saying that I totally understand all of what is happening. But there is some form of fear that AI will take over the world, or some AI system will take over the world and humans will be destroyed, something like this.

Reuben 35:38
So I take it you don't take this risk seriously.

Marc 35:40
I would say we should maybe focus on other problems we have at the moment. Sure, there is this, I would say, low-risk danger that this may happen at some point in the future.

But I believe there's far too much focus on this question, and not so much on where AI can help solve problems that we otherwise cannot solve. And these problems are literally burning problems.

If you look at climate change, for example. I think the existential risk question is getting too much attention and too much focus. I'm not saying we shouldn't be paying attention to it at all.

But I really believe we should focus primarily, or at least more, on other problems where AI can make a positive difference, and not so much on existential risks.

Reuben 36:44
You mentioned it as a sort of low-probability risk in the future. What kind of timeframe are you thinking of?

Marc 36:53
I don't know, I can't give you a real answer to this.

Reuben 36:56
Fair enough. So in terms of hype, what about the market? Do you think there's over-investment in AI at the moment?

Marc 37:08
My first thought now was no, because otherwise we would have more grant money, more funding for our research! But maybe I'm just doing the wrong kind of research; I should work on existential risks.

Reuben 37:23
Or large language models!

Marc 37:24
Or large language models. Maybe I should just work on those kinds of things.

Reuben 37:31
Well, about large language models specifically: do you think the hype there is too much?

Marc 37:40
I'm not sure, actually. I mean, a lot of interesting things have come out of those models, or generative AI models, and I think there is something to it. And I think it's not the end of the story.

Given the impact, I think it's fair to have invested, or to invest, resources into developing these things further. But I would say: always keep in mind there is other stuff going on as well. It's not just this one thing.

Reuben 38:21
Yeah, for sure. I think what we're trying to do on the podcast is represent all of these issues, and not make it an either-or. I think we can easily take all of these risks seriously.

Marc 38:34
Yeah, we have lots of people working in AI. And, I mean, post-2010-ish the research community exploded. This is largely due to deep learning. I really don't want to admit this too much.

But it's due to deep learning, because deep learning did have a huge impact on the AI and machine learning community. If you look pre-2010, we were working with small-scale problems, where we could actually understand the algorithms we developed; the 10 parameters in our models we could explain.

And since 2010-ish, the models became bigger, the datasets became bigger, the things we can do became more and more complicated, and machine learning has very much turned into an empirical science, which is fine.

I mean, there are still people doing theorems and proofs, but that community didn't grow that much over the last 10 or 15 years, whereas the empirical science branch in machine learning just exploded, like, exponentially.

Reuben 40:05
There's also exponential growth in terms of the size of the frontier models, the amount of compute being invested, the amount of money being invested; you've got all of these things sort of exploding recently.

And we've had extremely surprising progress, from my point of view, in the last few years as well. If these kinds of exponentials continue for a few orders of magnitude, things could get really quite crazy within a decade or two. Do you think much about where we're going? Are you excited? Are you nervous?

Marc 40:41
It's a bit of both, right? I mean, look at where we were 10 years back and where we are now. Think about the mindset that we had 10 years ago; the things that we can do now were unimaginable, for most people at least.

The compute power we had 10 years ago: I think our phones now can do more than what we could do, in terms of regular compute at least, 10 years ago. And maybe the phone that you have in 10 years will be able to train, maybe, GPT-2 or so. There will be advances, and it's very hard to predict the future.

But we also have huge challenges. And we need to think about how to solve those, or address those challenges, maybe not solve them to completion, but at least make progress.

Reuben 41:52
So if you had a speed dial for the different areas of AI research, which ones would you want to dial up? And which ones would you want to slow down?

Marc 42:02
I would like to dial up machine learning for scientific discovery. I think that is great, whether that's scientific discovery or environmental science; these are problems that need solving. Whether that is weather modelling, or, I don't want to say solving nuclear fusion, I mean, people have been trying to solve that for the last 70 years or so, but making significant progress in that area could be interesting, because that would also be one answer to the energy problems or carbon problems that we have at the moment.

So I would like to see some progress in that space. And if we can also make more progress towards curing diseases such as cancer, I would say this is generally good for people and planet. I would love to dial that one up.

Reuben 43:04
Can you imagine AI models helping with the core of scientific progress, as in coming up with hypotheses, designing experiments, evaluating results?

Marc 43:14
Yeah, definitely. A few years ago I was working with people at CERN on problems related to the Large Hadron Collider, and we built a fast proxy emulator of a simulator they had, to detect whether certain events happen in the LHC.

The problem with the regular simulator is that you have to wait quite a while to see the outcome of an experiment, or hypothetical experiment. And with the emulator that we built, we could get a speed-up of a factor of ten thousand-ish.

Reuben 43:55
Oh wow.

Marc 43:55
Which means you can basically probe this emulator and get approximate answers for a lot of parameter settings, which you can then use to do experimental design. So this, I think, was fun, and I think it's also useful, because it will accelerate scientific discoveries.

And similarly, you can use machine learning methods to do experimental design, whether that is drug design or molecular design, or even infrastructure design, for example in the context of building power plants.

Reuben 44:36
So some of these things that you might want to dial up have dual-use risks. Drug design, for example: a model created for drug design could also be used, with a bit of tweaking, to create poisons or toxins.

Marc 44:50
Yes, that is possible.

Reuben 44:53
And robotics could also be used for warfare.

Marc 44:57
It's already been used for warfare.

Reuben 44:59
Certainly, yeah. And as they become more and more automated, this could be an issue. Is there a way we can differentially progress these different fields, so that the good guys win, to put it crudely?

Marc 45:13
No, I actually don't think so. You have to put regulations in place, but I don't think you can prevent people from developing harmful things.

Reuben 45:29
Okay. So maybe then the question is, for something like drug discovery: does the attacker or the defender have the advantage? Should we be asking these questions before we develop this kind of technology?

Marc 45:41
We already have chemical and biological weapons, and nuclear weapons as well. So ABC weapons already exist; they will potentially become more sophisticated, if you want to develop them, I don't know, on a small scale.

But we have poisonings happening all the time. So I'm not sure whether AI or machine learning methods will make a huge difference in terms of the harm that can already be done.

Reuben 46:11
I mean, you could imagine a bespoke poison that only affects people of a certain demographic.

Marc 46:18
Yes, that is that is possible, and that will be genocide. But you know, we don't need poisons to commit genocide.

Reuben 46:29
Well, I suppose maybe it's like the disinformation. There's already disinformation; lots of people can do it. But if you make it easier to use, if you open source it and make it freely available online, then you're just sort of cranking up the risk.

Marc 46:44
Yeah, sure. I mean, if you don't open source it, you rely on the, I don't know, mental state of the people controlling it. So it's tricky, whatever you do.

Reuben 46:58
Mm hmm. I mean, there might be an argument to be made that we shouldn't open source a lot of these models.

Marc 47:05
And who is in control of the models, then?

Reuben 47:09
Well, sometimes it's better to trust just one government than every individual.

Marc 47:14
Oh, which one? You want to trust the UK Government, or the US government? And then wait for the change of government. Do you think they're going to revoke this? Who makes that decision?

Reuben 47:28
Well, I mean, if the UK Government has control of one of these models, then at least they can... haha, you're cringing. This sounds better than everybody on the planet having one, including terrorist organisations.

Marc 47:43
I can't comment on this on a podcast. Imagine this government has control... oh my god, they can't even control themselves.

Reuben 47:55
But at least they don't have a death wish for the entirety of humanity.

Marc 47:59
Are you sure? Have you seen the Home Secretary recently?

Reuben 48:43
So the phrase AGI has sort of snuck into the machine learning vernacular, without perhaps the criticism it deserves. In one sense it's mathematically impossible: the no-free-lunch theorem says that you can't have one algorithm that can learn every possible task. But in another sense, we're an existence proof that you can have highly general intelligence. How do you feel about the phrase?

Marc 49:08
I don't think it's meaningful, because it's not well defined. Everyone who says AGI makes their own assumptions and interpretations of what it should be, and in a way I don't think that is meaningful. And there's also the discussion, a bit more historical, about what intelligence, or an intelligent algorithm, is.

A few decades ago people said: oh yeah, if a thing can play chess to a reasonable level, then this is an intelligent system. Once that problem got solved: oh no, that's not really intelligence, let's do question answering. You know, and then...

Reuben 49:42
Shifting goalposts, right.

Marc 49:44
Right, exactly, you move the goalposts, because you can't really define it. I mean, you can set targets and then you redefine things. And AGI, I think, is even more complicated than that. Maybe if people say AGI is just recreating human intelligence, or something like this, if that is what general intelligence means, then I think just using a digital device will not succeed, because human brains also do chemical processing.

So digital intelligence, in some way, will not be able to succeed.

Reuben 50:21
Right, that sounds like cheating. You're saying there's some kind of intelligence, some chemical processing, going on in the brain that you wouldn't be able to replicate in silicon.

Marc 50:29
With a digital device, yes.

Reuben 50:31
Okay. But we can probably get quite far without that.

Marc 50:34
Possibly. But I'm just saying I'm not sure whether the expression "artificial general intelligence" is meaningful, or whether it's even something we should be aiming for. Maybe just say: I want to build something, a digital device or a digital processor, that solves problems. And I would just not go into that intelligence discussion.

Reuben 51:01
So when companies like OpenAI, DeepMind, or Anthropic explicitly say that they're trying to create AGI, do you think they're just playing into the hype? Or are they confused about what the term means?

Marc 51:14
I'm just saying I wouldn't do this. It's a business model; you attract attention by saying these things. I also believe that some companies really believe that this is something they should be aiming for, that they want to work on, and that's fine. But I personally don't think it's meaningful.

Reuben 51:37
And what about the phrase human-level AI?

Marc 51:40
How would you define that?

Reuben 51:42
So I can imagine a sort of extended Turing test, where you're having a conversation with an agent on a computer, and all you can see is text. But you can give it any task you wish and get back the results, as long as it's text-based; as long as it's a purely cognitive task, not one that requires embodiment. And then your job is to try and figure out whether it's a human or a model.

Marc 52:05
Okay.

Reuben 52:06
So I would say a model that can pass that has human-level intelligence. Would that be a coherent concept?

Marc 52:13
Maybe. But I want to go back to something that I said earlier: what do you want to do with it?

Reuben 52:18
Mm, what's the goal?

Marc 52:20
Yeah, what's the goal?

Reuben 52:23
I think for a lot of people, it's just sheer curiosity, the thrill of it.

Marc 52:27
And that's fine.

Reuben 52:30
Fair enough?

Marc 52:31
You know, if this is something people are curious about, or excited about, then I think that's fine. We are researchers, and we are also excited about things that probably nobody else cares about.

And we do these things because we are curious, we are excited about these things. So if human-level AI, or AGI, is something that drives progress by motivating people to do something, I think that's fair.

Reuben 53:06
So if it's possible to create such a system, then, because inference is so much cheaper than training, at the point at which we have enough knowledge and enough computation to train such a system, we would probably be able to deploy thousands in parallel. And having thousands of these models that can do any task a human can do is at least cause for concern, right?

Marc 53:33
I don't know. I mean, I think there is also something that we should consider, which is the physical world. You're going to have a million of these models, and then ask these models to pick up a cup. The embodiment is a problem.

As I say: when AI hits reality, it's over. As long as an algorithm can just exist in a computer of some form, it's fine, right? There's a lot of potential there.

But at the point where you actually have to interact with the physical world, we face a lot of other problems. You know, chess playing: solved. But I can't move the piece, so I need a human person to move the chess piece.

Reuben 54:21
So you mentioned earlier the risk of disinformation and bot farms. I can also imagine the risk of hacking, or writing spoof tweets from government officials. These are all cognitive tasks that would fall under the umbrella of human-level AI.

Marc 54:35
Yes.

Reuben 54:36
I imagine you could wreak absolute havoc with just an internet connection and a thousand of these models.

Marc 54:42
You can, right? And you can make people do things for you; you just manipulate people. That is definitely possible. I'm not saying there's no danger in having these manipulative models around.

I'm just saying that if the algorithms were to act by themselves in the physical world, things become problematic. But you can always manipulate people or other agents to execute whatever you want them to do, to do the things that you want to do but can't do in the physical world. Then, right, this is dangerous.

Reuben 55:27
Yeah, using humans as your hands.

Marc 55:29
For example.

Reuben 55:32
Do you think it's sufficiently dangerous that we should perhaps scrutinise companies who have this as the explicit goal?

Marc 55:39
I think a healthy amount of regulation is always a good idea.

Reuben 55:44
What kind of regulation do you imagine could mitigate these risks?

Marc 55:48
I think that's a discussion that needs to be had with governments and also AI developers.

Reuben 55:58
So one possible regulation might be limiting the autonomy of these systems: there's always human oversight, and they are used more as tools than as agents.

But of course, there's an enormous economic incentive to create agents, to have models that do things for you without you having to check whether they're doing the right thing, without you having to spoon-feed them tasks. The regulation seems like an absolute minefield, to try and prevent these kinds of things from happening.

Marc 56:31
I think that is correct, yeah. But there are people out there whose job it is to put regulations in place.

Reuben 56:40
Yeah, okay. You're absolutely going to hate this round, but what I'm hoping to do with all guests is a prediction round, where you give us your best guess for different events. So this human-level AI system: by what date do you think there's a 50-50 chance that we will have created such a system?

Marc 57:06
Oh, when you say "by", it can happen before. Let's say 2100. It could happen next year, but I'm safe with 2100 because I'll be dead by the time it happens.

Reuben 57:17
Haha. You think 2100 is the time by which it's about a coin toss whether we've created it or not?

Marc 57:25
Ah no, we may be much further along than that. I mean, maybe 2030?

Reuben 57:33
That's, that's er

Marc 57:35
Pretty soon.

Reuben 57:35
Within our lifetime. Wow, okay. I didn't expect that.

Marc 57:42
Human-level AI in the way that you defined it earlier, not AGI.

Reuben 57:49
Okay, let's go into that. What do you think would be the main differences there?

Marc 57:53
I don't even know what AGI is.

Reuben 57:54
Okay, fair enough, fair enough. And then what do you think the chance is of this being a net positive for society?

Marc 58:05
I was about to say zero, but I'm not saying that yet. The question is, what does it actually do for society? And we don't have that answer yet, right? You said earlier that people may want to create human-level AI just because they're curious.

That doesn't have any kind of immediate societal benefit. If we can deploy these systems for society, then I see potential for a net benefit. But for pure curiosity, I don't think there's any societal benefit. It's like: if I want to invert this matrix and I'm curious how I can do this by hand, that doesn't have any societal benefit.

Reuben 58:52
So this would be a political question, about how you actually integrate it?

Marc 58:56
I think it's about how we use the results from developing human-level AI to make some form of meaningful contribution to society. And I'm not sure we have even entered the discussion of how human-level AI can be beneficial to society.

It's more, as you said earlier, about: can we actually build it? And at the moment, if we're only talking about whether we can build it, I don't see any benefit for society. The question is, once it's built, what can be done? But I don't think we're having that discussion at the moment.

Reuben 59:34
And when you're putting a low chance on it being a net positive for society, are you putting significant weight on zero or negative?

Marc 59:45
Haha, right, so now it's good to have some calibrated error bars here. I mean, if the point is we just develop human-level AI, then there is zero benefit to society; the question is how we deploy it.

And if we deploy it, let's say if there is some human-level AI built and we deploy it in a meaningful way, then there could be a net benefit. But there are also dangers, right? It's always two sides of a coin.

Reuben 1:00:16
And if you were to make your all-things-considered best guess as to whether we'll deploy it in a useful way, or a dangerous way, or a neutral way?

Marc 1:00:25
Well, all things equal, I'd stick with zero. Well, it's not necessarily negative, but it could easily be negative. Yeah. So I would go with an average of zero and some rather large uncertainty around that.

Reuben 1:00:41
My guest today has been Professor Marc Deisenroth. Thank you for coming on Steering AI.

Marc 1:00:46
Thank you so much for having me and thanks for a great conversation.

Reuben 1:00:51
You've been listening to Steering AI from the Department of Computer Science at UCL. Subscribe for future episodes, join the conversation on Twitter with the hashtag #SteeringAI, and follow us on the account @UCLCS for announcements of future guests and your chance to suggest questions. Thank you for listening, and bye for now.