Talk 200

Sitting down with host Andy Spinoza to talk all things digital and AI are Dr Riza Batista-Navarro, Senior Lecturer in Text Mining at the University; Dr Mauricio Álvarez, Senior Lecturer in Machine Learning; and Dr Filip Bialy, Research Associate here at Manchester, Assistant Professor at Adam Mickiewicz University in Poznań, Poland, and Lecturer at the European New School of Digital Studies. 

Our experts discuss Manchester's technological innovation from the Industrial Revolution to the present day, including the development of the Manchester ‘Baby’ – the world's first electronic stored-program computer – and Alan Turing's pioneering work in AI. 

They size up the ethical and political implications of AI and digital advancements and evaluate the University’s current position as a leading centre for progress in this field – aiming to drive innovation and leverage these powerful technologies for the greater good.  


What is Talk 200?

Talk 200 is a new lecture and podcast series from The University of Manchester, launching to mark our bicentenary: 200 years of making a difference.

This year we’re reflecting on our past, celebrating our present and looking to the future – and Talk 200 invites listeners to be part of the journey.

Our podcast host, Manchester aficionado, author, and University alumnus Andy Spinoza will be joined by a diverse line-up of guests from our community – pioneering academics and notable figures, inspiring staff, alumni and students – to discuss topics such as health, digital and AI, climate change, and equality and justice.

[Music]

Hello and welcome to Talk 200, a lecture and podcast series to celebrate The University of
Manchester's Bicentenary Year.

Our 200th anniversary is a time to celebrate 200 years of learning, innovation and research.

200 years of our incredible people and community, 200 years of global influence.

In this series, we'll be hearing from some of the nation's foremost scientists, thinkers and
social commentators,

plus many other voices from across our university community, as we explore the big topics
affecting us all.

In today’s episode, we’ll discuss Manchester’s historical and current contributions in
driving the technological advancements that shape our world.

From the first stored-program computer and Alan Turing’s conception of thinking
machines to the cutting-edge research currently being done into artificial intelligence,

Manchester has long been at the centre of technological and scientific progress.

Can I ask you to introduce yourselves?

Yes, thank you. I'm Riza Batista-Navarro, I'm a Senior Lecturer in Text Mining.

I'm based at the Department of Computer Science in the School of Engineering here at the
University.

In my research and teaching, I focus on Natural Language Processing (or NLP) and Text
Mining,

and more broadly speaking, Artificial Intelligence (or AI).

My approach to AI is basically developing methods.

I specialize in a sub-area known as Information Extraction, that's what I did in my PhD.

But then I have this view of applying those methods to the Sustainable Development Goals, to try to help achieve those SDGs.

So, if I were to categorize or to put a label on what I do, I would say AI and Text Mining for
social good.

My name is Mauricio Álvarez, I'm a Senior Lecturer in Machine Learning.

I lead some machine learning initiatives in the Department.

So, I’m currently the director of a new AI centre for doctoral training in decision making for
complex systems.

My main area of research is a particular branch of Machine Learning called Probabilistic Machine Learning.

So basically, Machine Learning is a sort of black-box model for learning from datasets.

The special thing about probabilistic machine learning is that these models are able to provide uncertainty quantification alongside their predictions.

So, it's not only saying "this is what my model predicts"; I can also give you a measure of uncertainty about that prediction.

I develop those kinds of models for different types of applications: from healthcare to, you
know, following or trying to predict pollution concentration.
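
To make the uncertainty-quantification point concrete, here is a minimal sketch using scikit-learn's Gaussian process regressor. The toy data and kernel choice are invented for illustration; this is not the models from the research described here.

```python
# A minimal sketch of probabilistic prediction with uncertainty.
# Toy data only, for illustration.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Toy training data: a few observations of a smooth function.
X_train = np.array([[0.0], [1.0], [2.0], [4.0], [5.0]])
y_train = np.sin(X_train).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0))
gp.fit(X_train, y_train)

# predict() can return a standard deviation alongside the mean:
# the point prediction *and* a measure of how uncertain the model is.
X_test = np.array([[3.0]])
mean, std = gp.predict(X_test, return_std=True)
print(f"prediction = {mean[0]:.3f} +/- {std[0]:.3f}")
```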

Hello, I'm Filip Bialy. I have a PhD in Political Theory and a background in Computer Science.

I am a Research Associate on the Digital Campaigning and Electoral Democracy project, which concentrates on studying how digital, data-driven tools impact campaigning around the world.

My own research also focuses on how narratives about AI and politics may impact public
decision making and our understanding of technology and politics.

If we can start by thinking about the tradition of innovation at Manchester University over a 200-year period: advances in manufacturing and engineering, and how the digital and computing area has also followed that line, with the 'Baby' computer in 1948, led by Professor Tom Kilburn.

Mauricio, could you give us more detail on that?

I think of the role of Kilburn and the team around him in coming up with the design of the first stored-program computer. He was actually instrumental in founding our Department of Computer Science here in '68, and in all the lead-up to the commercial version of that machine.

And also the role that Turing had in collaborating with them, creating this extraordinary computer for its time.

So just to continue from that, it basically signalled the beginning of modern computing,
and it all happened in Manchester.

And it's quite a significant advancement because, prior to that, the way you would make a machine do a certain task was to reprogram it physically every time; machines lacked the capability to have a memory and to store a program.

So, when the ‘Baby’ was developed, then that kind of proved to everyone that “oh yeah,
okay, it's possible to store a program for a computer or for a machine to have a memory”.

And basically, that's become influential.

I think it's worth mentioning that Manchester's 'Baby' was developed as part of a project which was funded by the Royal Society in 1946.

And the grant recipient was Professor Max Newman, a mathematician from Cambridge
who was actually a lecturer at Cambridge at the time before the war,

when Alan Turing attended the University. And it was Newman's lectures that actually inspired Turing to write his groundbreaking paper, which we now consider pivotal in the history of Computing, because in this paper Turing developed the so-called 'Universal Turing Machine' idea.

It was before the war.

After the war, in 1946, Newman received the grant and founded the computing lab here, in
Manchester.

And he invited Turing in 1948 to join him in this lab.

And it is very interesting to see the complex history of the attempts to develop the first
computer here because it was first driven purely by scientific academic interest.

Newman wanted to use the computer to solve mathematical problems, but engineers, and also the government, which in this new post-war, Cold War era was interested in using computers for other purposes, immediately joined in the effort, and it created a great environment for many people to develop the ideas for the first computers.

I mean, can we think about the Manchester team: Freddie Williams, Tom Kilburn and Geoff Tootill. How could they have been able to predict the far-reaching implications of their work?

Do you know if there were any...they made any predictions at the time?

That was very interesting because, actually, they of course were driven by those pure
academic interests, but they also thought about applications of those machines.

And in 1951, Freddie Williams, in his presentation at the inaugural conference of the Ferranti Mark I at Manchester, mentioned several items that would be investigated as part of the project and as part of developing the machines. I'll just read out what he said: "partial differential equations arising out of biological calculation".

This is something that Alan Turing was interested in.

"Simultaneous linear differential equations and matrix algebra and their applications to the cotton and aircraft industries and electricity distribution".

Which is immediately a real-world application. "Tabulation of gear functions, design of optical systems, Fourier synthesis for X-ray crystallography, design of plate fractionating towers, chess problems..."

So these are very different types of applications that could be conceived as part of
developing computers.

So, it means that the interest in real impact on the world was always there.

Can we talk about Alan Turing's legacy, starting with the Turing test, perhaps, and those
ideas that he first proposed?

I guess a bit related to what Filip is saying,

I think Turing basically anticipated AI. He started talking about ‘thinking’ machines and
then he came up with, as you said, the Turing test.

Which I personally think is actually very relevant nowadays. So the idea is that you have two participants and you have a judge, and the judge doesn't see the two participants.

The judge doesn't know which of the participants is a machine and which is a human, and the judge asks questions.

And then, based on the answers, the test is whether the judge can tell whether a participant is actually a machine or a human.

And nowadays we have kind of this technology all around us, generative AI and large
language models, or LLMs, as they're properly known.

And there's always this question of “how do I know if something, for example, was written
or generated by a machine rather than created by a human?”.

So, I feel like his thinking around that, you know, the Turing test, is very much relevant to what we have nowadays.
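
As a rough sketch of the protocol just described, the toy code below stages the imitation game. Both participants and the judge are hypothetical stand-in functions with canned behaviour; the point is only the structure: the judge sees answers, never identities.

```python
# A toy staging of the imitation game (Turing test). All participants
# here are invented stand-ins, purely to show the protocol.
import random

def human(question: str) -> str:
    return "I grew up near Manchester; the rain shaped my sense of humour."

def machine(question: str) -> str:
    return "As a language model, I do not have a childhood."

def imitation_game(questions, judge):
    # Hide which participant is which behind anonymous labels.
    participants = [("human", human), ("machine", machine)]
    random.shuffle(participants)
    labels = {"A": participants[0], "B": participants[1]}
    transcript = {label: [fn(q) for q in questions]
                  for label, (_, fn) in labels.items()}
    guess = judge(transcript)  # the judge names the label it thinks is the machine
    actual = next(label for label, (name, _) in labels.items()
                  if name == "machine")
    return guess == actual     # True if the machine was identified

# A trivially naive judge: picks whichever participant mentions being a model.
def naive_judge(transcript):
    for label, answers in transcript.items():
        if any("model" in answer for answer in answers):
            return label
    return "A"

print(imitation_game(["Where did you grow up?"], naive_judge))
```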

Yeah, so to add to that, I think there is an idea today that the Turing test perhaps had a sort of historic value, in the sense that it posed a good research agenda for a lot of people in AI to work on in the following years.

But, I think, beyond the Turing test, the problem of Artificial Intelligence is way more
general.

It's about whether we can actually create machines that are able to understand.

But there is a lot of controversy today as to whether what Large Language Models, for example, actually do is simulate understanding rather than actually understand.

And I think historically the Turing test is important because, back then, it was an attempt to define what an intelligent agent is. Instead of going into a definition of what it means to be intelligent, Turing created this test and said:

well, let's look at whether the computer program would actually be able to pass the test or not.

And that sort of set the agenda for quite a few decades into AI.

But these days, when you talk to one of these chatbots, you actually get quite surprised by how sophisticated they are.

And a lot of people would say, "well, they've just passed the Turing test"; but are they actually understanding? That is the most fundamental question.

And the creativity, perhaps. So I think there's an ongoing debate around whether machines are actually creative, or whether they are just copying or kind of regurgitating what humans do.

Can we discuss how the University has evolved its approach to technological research
over the years, from those early days to where we are now?

Well, I guess from Computer Science, it's been about formalizing the education and the training that we do for new generations, since the creation of the Department of Computer Science in 1968.

That created all these training programs, getting the proper staff to start teaching those programs, with that staff creating their own research groups and following the way in which research is done around the world these days.

So that tradition of creating that sort of environment where students go from undergraduate study to postgraduate research, deep-diving into three years of research, working on very specific problems and then producing their research thesis...

All of that has taken some time to establish.

And one of the issues is that the technology advances at such a pace that it's a challenge for the courses and the teaching to keep up.

I think that concern is definitely there; it's just something that we need to deal with, right? Drawing from my personal experience, I teach a unit called Natural Language Understanding, which draws upon concepts from Natural Language Processing, and it's necessary almost year on year to update the material because of, as I said, the advances in the field.

So you can't keep the same material from two years ago because then you won't be
teaching your students what they need to know to become a practitioner.

So definitely that is a factor there, but also to add to what Mauricio said, I think the
University has always valued interdisciplinarity.

So, we've been talking about history, right? Clearly, early on, mathematicians and engineers collaborated to come up with the 'Baby', for example.

And increasingly, I think, the way I look at it, the University has encouraged academics and researchers to work on collaborative projects across different faculties and departments.

And I think that approach is quite important because it means that we might be doing
something kind of fundamental, let's say, you know, in Computer Science, but then there's
always an aspect of looking towards how that can be applied or how that can be used in
other domains.

And I think that's, I think that's an important kind of approach to developing this technology
here.

Yeah, I mean, all three of you started your careers outside this country. So you must have had an idea of Manchester, which, you know, likes to define itself as a hub of digital innovation.

I'd be interested to know if that's been the case; was that how you perceived the city?

Certainly. Of course, when I first came to Manchester, I had this 19th-century image of a city of manufacturing, with tall chimneys and full of smoke.

And I was surprised how beautiful the city is. But I was also surprised how open the University is, and how much it cares about the kind of interdisciplinary work that is necessary today, especially working in the field of the digitalisation of politics.

I couldn't work on that subject without collaboration with people from technical sciences,
from Computer Science.

So it is important that the University actually supports this kind of collaboration.

How did you perceive Manchester?

Yeah, for me, it was really important that there was already a critical mass around Artificial Intelligence at the University. And I guess the University is making an effort to create, again, that environment that will allow talent to thrive.

Students wanting to come here to do their PhDs.

Researchers want to pursue their careers here because they see there is a sizable group able to make a contribution to, in our case, the research I do: the fundamentals of Artificial Intelligence.

And I think that's also possible because, again, interdisciplinarity is a very important aspect for...

We've got big medical schools and the health and life sciences and spin-outs... There's a lot for digital technology to apply itself to, isn't there?

Exactly. Yeah.

Yes, it's a bit similar for me. Knowing about Manchester's rich history also drew me to the city.

Like you, I also had this vision of a very industrial city. But, also similar to Mauricio... so, I did my PhD in biomedical Text Mining.

So, I felt like, "Oh, I really want to come to a university where I will be part of a group who's
doing interdisciplinary work."

So, for me, that was really important, and diversity is also a key thing. So, you know, I...
coming here, I really felt kind of welcome, you know, to the city.

So, I think that that was quite important.

AI is changing our world in so many ways, isn't it? And I think we're on the cusp of some
really large-scale changes that the general public will start to appreciate.

So, can we go around the three of you, asking about your own research and its real-world applications?

Yeah, so I’ll make a start. So, I mentioned earlier that I specialize in an area in Natural
Language Processing called ‘information extraction’.

So essentially, briefly speaking, it's about extracting fine-grained information from large amounts of text automatically, with a view to being able to understand what's within the text.
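
As an illustration of what information extraction looks like in practice, here is a minimal sketch using spaCy's off-the-shelf named-entity recogniser; it is a generic stand-in, not the systems developed in the research discussed here.

```python
# A minimal information-extraction sketch: pull structured entities
# out of free text. Requires the small English model:
#   python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
text = ("The 'Baby' ran its first program at the University of Manchester "
        "on 21 June 1948, two years after a Royal Society grant in 1946.")

doc = nlp(text)
for ent in doc.ents:
    # Each entity comes with a fine-grained label: ORG, DATE, PERSON, ...
    print(f"{ent.text!r:45} -> {ent.label_}")
```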

So, one of my projects that I'm quite passionate about is how that information extraction
can be applied to try to lower the carbon emissions of the food that people consume.

And I'll try to explain a bit. So, I have this observation that when people cook at home or maybe eat out, they have become more conscious about the nutritional value of the food that they eat.

But maybe not so much about the carbon footprint of what they eat. And I feel like if we
integrate some AI into the analysis of recipes, for example, so when you cook at home, you
follow a recipe.

And then if you try to integrate AI so that AI can tell you, "Oops, this recipe is very high
carbon,

whereas this other recipe is lower carbon”. Then that could potentially influence your
decision making at that point: which food to prepare, which food to eat.
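
A toy sketch of how recipe-level carbon scoring might look is below. The emission factors and recipes are invented placeholders, not real data, and a real system would first extract the ingredients from recipe text with NLP.

```python
# A toy recipe carbon estimator. Emission factors (kg CO2e per kg of
# ingredient) are invented placeholders for illustration only.
EMISSION_FACTORS = {"beef": 27.0, "chicken": 6.9, "lentils": 0.9, "rice": 2.7}

def recipe_footprint(ingredients: dict[str, float]) -> float:
    """Ingredients given as {name: kilograms}; returns total kg CO2e."""
    return sum(EMISSION_FACTORS.get(name, 0.0) * kg
               for name, kg in ingredients.items())

beef_curry = {"beef": 0.5, "rice": 0.2}
lentil_curry = {"lentils": 0.3, "rice": 0.2}

for name, recipe in [("beef curry", beef_curry),
                     ("lentil curry", lentil_curry)]:
    print(f"{name}: ~{recipe_footprint(recipe):.1f} kg CO2e")
```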

So that's one thing that I'm quite passionate about. I work on many other projects, perhaps too many, but here are just a couple of other examples.

So, I have a PhD student, for example, whose focus is on analyzing narratives, so again dealing with text, to try to automatically identify indicators of forced labour.

So, this is in the modern slavery space. Because, you see, not everyone can easily detect whether, let's say, a migrant worker is going through forced labour.

So, we want to see whether AI and Natural Language Processing can try to understand these texts, to try to expose those cases.

And another project has to do with automatically identifying hate speech in Ethiopian
languages or rather in languages used in Ethiopia.

So, in social media, as we probably all know, unfortunately there's a lot of kind of
aggression and hate speech going around.

So, the idea is to try to use AI and Natural Language Processing to automatically detect occurrences of hate speech, to...

To flag it up to administrators?

To flag it up. Yeah, yeah.
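
As an illustration, here is a minimal text-classification sketch in the spirit of such a flagging system. The tiny "dataset" is invented and English-only, far too small to be meaningful; the actual research targets languages used in Ethiopia with large annotated corpora.

```python
# A toy hate-speech flagger: TF-IDF features plus logistic regression.
# Training examples are invented placeholders for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = ["I hope your whole group disappears",    # toy 'flag' example
               "Lovely weather in Addis Ababa today",   # toy 'ok' example
               "Those people should all be driven out",
               "The new bridge opens next week"]
train_labels = [1, 0, 1, 0]                             # 1 = flag for review

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(train_texts, train_labels)

for post in ["They should all be driven away", "The market opens early"]:
    flag = clf.predict([post])[0]
    print(post, "->", "flag to moderators" if flag else "ok")
```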

So, as I said in the introduction, I work on the fundamentals of Machine Learning. This is developing new models to be able to extract more knowledge, or do more, with less data, basically.

So, I work in the Big Data regime, and I have applications there, but most of the development I do is on low-data-regime problems, where you don't have a lot of data points.

So, one of the applications I’m working on at the moment, for example, is in drug response
curves.

So basically, this is a joint collaboration with colleagues at Imperial and in Singapore, where we have datasets that let us track the response of a cell line to a particular cancer drug, so different types of cells associated with different cancers.

So you can think of these as curves that can be plotted in terms of the amount of drug applied to the cancer cells and how many cells die off with that drug.

The problem is that when we face a new cancer, these pharmaceutical companies have to do a lot of different tests again, and that is very, very expensive.

So what you would like to do is create new mathematics, new algorithms, that can take what we know now and, based on that, generalize as far as possible from the fewest data samples we can provide for a new drug, and then try to reconstruct the whole curve using just a few trials or samples of that new drug.

That can have a great impact because it allows us to reduce the number of tests that these pharmaceutical companies need to do in the lab.
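
To illustrate the curve-reconstruction idea in its simplest form, here is a sketch fitting a sigmoid (Hill-type) dose-response model to a handful of invented measurements; the actual research uses transfer learning across cell lines, not a single-curve fit like this.

```python
# Reconstruct a full dose-response curve from a few measured points by
# fitting a sigmoid model. All data values are invented placeholders.
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(dose, top, ec50, slope):
    """Fraction of cells surviving as a function of dose."""
    return top / (1.0 + (dose / ec50) ** slope)

# Only a handful of lab measurements for the new drug.
doses = np.array([0.01, 0.1, 1.0, 10.0])
survival = np.array([0.98, 0.90, 0.45, 0.08])

params, _ = curve_fit(sigmoid, doses, survival, p0=[1.0, 1.0, 1.0])
top, ec50, slope = params

# The fitted curve can now be evaluated at doses never tested in the lab.
print(f"estimated EC50 ~ {ec50:.2f}; predicted survival at dose 3.0: "
      f"{sigmoid(3.0, *params):.2f}")
```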

So not just a reduction in cost, but also in terms of time.

Exactly, and lab effort. So, in Machine Learning, we tend to call this area 'transfer learning'. It's like: how do you learn on one dataset, but then extract that knowledge and apply it to a different dataset?

So, how do you transfer that knowledge? Following on from that idea, we have a collaboration with a university in Uganda, in Africa: Makerere University. Uganda is one of the countries most affected by air pollution.

They cannot afford the good sensors, which are very expensive, so they buy very low-cost sensors, sensors that are really cheap. So it's again the idea of how you create a joint Machine Learning model that is able to learn both from the low-cost sensors and from the few high-quality sensors, such that, learning from all of those, you are able to improve the kind of predictions you could get if you actually had the high-cost sensors that you can't really buy.
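
Here is a deliberately simplified sketch of that multi-fidelity idea: calibrating cheap sensors against a few co-located reference readings, then applying the learned correction across the network. All numbers are invented, and the actual work uses joint probabilistic models rather than plain linear regression.

```python
# Calibrate cheap sensors using a few co-located reference-grade
# readings (values in ug/m3 are invented for illustration).
import numpy as np
from sklearn.linear_model import LinearRegression

# Co-located pairs: cheap sensor reading vs. reference-grade reading.
cheap = np.array([[12.0], [25.0], [40.0], [55.0], [80.0]])
reference = np.array([9.5, 21.0, 33.5, 47.0, 66.0])

calibration = LinearRegression().fit(cheap, reference)

# Correct a network of cheap sensors where no reference instrument exists.
network_readings = np.array([[18.0], [30.0], [64.0]])
print(calibration.predict(network_readings))
```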

So that's the kind of research that I'm interested in at the moment.

Thank you. A couple of great examples there, thank you. And Filip, your area is AI as applied to the political arena.

It is AI and other digital tools. So, within the project 'Digital Campaigning and Electoral Democracy', led by Professor Rachel Gibson, we are trying to answer

a question that has gathered a lot of public attention in the last decade, I would say since the 2016 election in the US and the so-called Cambridge Analytica Facebook scandal, in which there was this process of targeting voters, or micro-targeting voters, using advanced data-driven tools.

And the question that we want to answer is whether it is actually the case, whether
political campaigns are using those tools.

So, I think it's of public interest as such. But since then, of course, we have witnessed the introduction of generative AI, with ChatGPT being introduced in the fall of 2022.

So now we have added to that the problem of the use and misuse of AI-powered tools. So, I would say it immediately has this value of being interesting to the public.

But in more general terms, what I'm working on is also about looking at the narratives on
how AI is impacting politics and democracy, because we hear those stories all the time.

Is it actually the case? It is important to verify such stories because they can impact decision making, for example on how this government here in the UK and other governments will react, and what policies they will introduce in order to prevent the abuse of AI.

And if the impact is actually not that big, and I would say that, for now at least, it doesn't look significant, we may need a different type of legislation than if it were actually impacting the electoral process significantly.

So, your work really feeds into the debate about the ethical implications of AI within the political arena.

Yeah, so ethics, which has been there for a long time, has now gathered renewed attention, I would say after 2015 and the famous AlphaGo, the AI that was able to win at Go, a game much more complex than chess.

So since then, people have started thinking: well, maybe we should pay more attention to developments in this field, because they may impact society.

And I would say that it is important to look at those narratives about the impact, as I said, and also to look at how it changes the way people act, not just in the sphere of politics but elsewhere: how it impacts relationships, how it impacts the way we work.

Whether we should be afraid of being replaced by AI.

So, these are societal implications.

I would say the field that I'm working in, the field in which a lot of people are now focused on very particular problems, is called 'societal impacts of AI'.

So, it is broader than just ethics, because it includes the economy, it includes politics; of course it must be based on ethical considerations, but it is much broader now, thanks to those recent developments.

And it's a broader conversation about whether it's even possible for regulation, you know, for governments to regulate, or for business to regulate itself.

I mean, can I just throw open that idea? Is regulation of AI even possible, or desirable?

I think yes, in the sense that, as far as I'm aware, there is already a European AI law which is looking into regulating these kinds of AI-based applications.

And as far as I recall, how it's structured is that it outlines certain criteria, and if what you are developing, as someone working in the space of AI, falls under certain criteria, then it might not be allowed at all, for example.

So as far as I recall, an example of that is an application that is essentially monitoring or
surveilling people.

But then if, let's say, what you're developing as an AI practitioner is kind of, let's say, at a
lower risk, then it might be allowed, but again, under certain parameters.

So, I think it should be possible. I think the question is more that the regulation might be there; it's about how you enforce it.

We have laws against cybercrime, but that doesn't stop people from committing cybercrime.

And there's also the tension around enforcing it. There's pressure from, let's say, the private sector, from industry, because obviously they have their own interests, and those might not necessarily align with what's in the regulations. They will always lobby or push back on those regulations, and they're quite influential.

So that's also a reason why sometimes the regulations are not, let's say, properly enforced.

Yeah, and it also relates to the ability of the public to have access to those tools.

There is now a division: one camp thinks, "Oh, we should heavily regulate AI, because otherwise these AIs might become conscious or autonomous at some point."

And then we won't be able to control them. So only...

So, the Elon Musk scenario...

Yeah, exactly.

...Most of the public, if they've got any understanding of AI, might have it informed by some of his nightmarish predictions.

Yeah, exactly. But then, on the other hand, we have this movement saying we need to have open science, we need to have open AI.

Open not as in the company, but projects where the public can actually access these tools and know what's underneath: what these tools are, how they work.

And it also comes down to the difference in budgets that each of those projects can have, because while private companies can invest billions of dollars, I don't think governments are doing the same. They're making some investment, but it's not going to be at the same level as these private companies.

Well, that moves nicely onto my question for Filip, really, in terms of what the implications of AI power are for democracy, if that AI is controlled by big private-sector interests.

So that is why it is interesting to look at how governments and international organizations are trying to regulate AI, because it actually says a lot about how much they are afraid of those impacts, but also about how misunderstood the idea of technology, and of technology's relationship to society, sometimes is.

So, speaking about the European Union's approach to AI and the introduction of the AI Act this year: well, it solves some issues, but there are also some problems with it, because it treats AI as just one thing.

But AI is not just one underlying technology based on Machine Learning; it is a variety of different applications. Some of them are quite important to develop because they can help in medical research, for example, but some of them are purely commercial.

And then there is another layer: how people will use those tools. You may have the intended purpose of using ChatGPT to learn something, or maybe to write something for, let's say, marketing purposes, and that is something people are doing right now.

But it can also inform some of the dangerous activities, and it could be used to automate
propaganda and disinformation.

So, there are many different applications of the technology, and it would require separate
regulation of each sphere.

So, you're saying that the EU approach is too crude, not nuanced enough, to deal with those?

It tried to balance two things: innovation, meaning support for European companies that are trying to develop new AI systems, and safety for consumers, because that is another thing.

The EU tried to regulate AI systems as products. So the approach was not actually to prevent harms to democracy as such, but to create a safe market for these products to be sold and developed.

We are still living in a world in which governments, and political parties for that matter, are very vague about AI. If we look at the manifestos of political parties in the last election in the UK, for example, the manifestos are very vague about AI, and also very uncritical about AI's potential to change the economy and so on.

But we need more attention to particular misuses of AI, and it is still not properly regulated.

I mean, you mentioned the manifestos, and there are several significant elections across
the world this year.

In your sort of monitoring role, have you seen that certain elections have had AI applications involved in them? Or what's your general feeling about AI in politics this year?

In general, I would say that we need to resist this temptation to look at the relationship
between AI and politics in this manner that is informed by so-called ‘technological
determinism’.

So, technology is certainly impacting democratic processes, but it is not the only force.

If we look at elections in different countries, we may observe that in countries with more established democratic systems, the impact of malicious uses of AI is much lower than it is in less established democracies.

For example, in Turkey in 2023, the impact of deepfakes, those videos that are generated with the use of AI, was significant.

Some commentators said that because deepfakes were used to force one of the candidates to resign, the results were different than expected.

It is much different in democracies such as the UK's or the US's, and also in Poland, my home country, where we observed political parties using AI, for example, to generate the voices of their opponents from the other side of the political spectrum.

So, it was certainly used, but the impact is not just technological.

It is about the fact that countries and democracies have histories. In the UK especially, the factors that impacted the election were that the country had gone through a period of 14 years of Conservative government, Brexit, Covid, austerity, and so on.

These are societal and political processes that can maybe somehow be used by people who will then use technology to attempt to influence voters, but those societal and political processes are too complex to be shaped by technology alone.

Thank you. You mentioned deepfakes, and...

For us, education has also become a very important issue, because now, when undergraduate students or postgraduate taught students need to write their dissertations, they have the help of generative AI at hand.

Sure, well, let's talk about that.

For us, it's not so much deepfakes; it's the question of whether these students are producing this work themselves, or whether they are getting help from one of these generative AIs. And I think it has shaken up the whole Higher Education system, because different universities have different approaches to how to deal with this, and I think it is still an open discussion.

So, what we would like nowadays is a tool which can detect whether a piece of text was actually written by a human or by a machine.

And researchers have been working on that problem, the question being: is it possible to develop a tool that can reliably detect the difference?

And I don't think it's quite there yet, from what I recall. It's still an open problem, as Mauricio said. So, it is still a challenge.
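
One heuristic researchers have explored is scoring text with a language model: machine-generated text often looks less "surprising" (lower perplexity) to such a model. Below is a minimal sketch using GPT-2 as a small stand-in scorer; as just noted, no signal of this kind is reliable on its own.

```python
# Perplexity scoring as a (weak) machine-text signal, using GPT-2.
# The model weights are downloaded on first use.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # The model's loss on the text is its average surprise per token.
        loss = model(ids, labels=ids).loss
    return float(torch.exp(loss))

for sample in ["The cat sat on the mat.",
               "Crimson algorithms devour the whispering spreadsheet."]:
    print(f"{perplexity(sample):8.1f}  {sample}")
```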

So, as you said, the impact on our teaching is that, I don't know about you, but for me, in my assessments, for example,

I've steered away from essay-based, written work. It's not that I don't trust the students,

but it's just to say: okay, let's go for oral presentations instead.

[inaudible: to be fair to everybody?]

Yeah, exactly. Yeah, yeah.

Because, as you said, you want to be giving marks based on what you trust is their own work.

So yeah, so definitely, as you said, it's shaken up how we do things.

Could we even see a return to real-time, in-person, old-style sitting in an exam hall with pen and paper, to really test students' knowledge in this day and age?

Yeah. Well, it's a good question, because then you think: okay, how should we actually assess students?

Because these students are going to go out into the real world, and what are the actual tools they're going to be using?

So what is it we actually need to teach these students? What is it they actually need to learn that's going to be effective when they're part of the workforce later?

We're getting into a deep area of what knowledge even is in this day and age.
But if I could maybe jump in on this point, because I have developed a metaphor, I don't know whether you will find it useful, in which we might characterize our current approach to teaching as a 'Robinson Crusoe' paradigm.

In it, we think of our students as needing to be able to somehow survive if they find themselves on a desert island without any tools, without anything to help them.

And the other approach would be to use the concept of the 'extended mind',

which was proposed by a few philosophers in the 1990s. And they said, if something is at
our disposal, like a notebook with the address of the gallery that I want to go to,

it is as if it were part of our own mind. So maybe now, with those tools that are constantly available to us,

we should stop thinking that the only thing that matters is what is in our biological brain. It is also in the tools that we have at our disposal.

And, as you suggested, maybe we should adjust the way we approach teaching, knowing that when students go into their jobs, they will have access to those tools.

So perhaps we should not take those tools away. Although I certainly agree with what you are trying to achieve:

we want them to learn for themselves and to think without any additional help.

Because an old-fashioned response would be "If you enable them to find every piece of
information through devices, then it takes away curiosity."

So that is why we need to speak to them openly, to tell them: "if you start using AI from your first day at the university, you will basically do harm to yourself".

It's not that you will be cheating anyone else; in the end, you will actually be cheating yourself.

So how are we training the next generation of AI students, researchers, scientists?

Yeah, so I think it is very important that we at the University keep the training going at two levels.

One level is the fundamental level. We have the Centre for AI Fundamentals, which wants to create this critical mass of researchers developing models and algorithms.

But the other is how AI is applied to many different areas of engineering and science.

So, we recently got funding for this UKRI AI Centre for Doctoral Training in Decision Making for Complex Systems.

So, the plan is to train 72 new PhD students over the following five years.

We are going to have five cohorts, and each one is going to have around 14 students or so.

And then the idea is that we are going to form interdisciplinary teams between experts in AI

and experts in particular sciences, for example in astronomy, physics, engineering, biology, materials science...

And I think, looking ahead, what we are seeing more and more is how AI is impacting a lot of different domains outside the traditional ones like Computer Vision and Natural Language Processing.

We are now seeing efforts to actually apply AI to materials discovery, for example, or to drug design,

which are areas that in the past were perhaps not so closely related, where people didn't think "oh, we could apply AI to that".

So I think interdisciplinarity is something really important for us at Manchester, and also at our Centre specifically.

Thank you. And Manchester has been recognised by the government, hasn't it, with something called the Manchester Prize, from the UK's Department for Science, Innovation and Technology.

Riza, could you tell us something about that?

I think it's very nice to be recognised, for a prize like that to be named after Manchester, again in recognition of Manchester's role in digital innovation, computing and AI.

It will happen in several rounds. The first round was focused on, as far as I know, energy, environment and infrastructure.

I think it was very popular; more than 400 applicants applied.

Then there was the shortlisting of the ten finalists, who each got £100k to start up their work.

And the grand prize winner will get, I think, £1 million sometime next year, basically to further develop the technology.

But essentially, it's a very nice demonstration of how AI in particular can be applied to areas like energy, environment and infrastructure to make a really huge social impact.

And I think, yeah, it's a good way of fostering activity, because it's open to anyone.

So, it's not just for academics and researchers; it's for start-ups and companies, so it's a nice way of fostering a lot of activity across the different applications of AI here.

Absolutely, and the University has a platform called Digital Futures.

It's one of three platforms in the university, and it essentially brings together academics, researchers and students working in the space of computing, data science and AI.

But the nice thing is that they have themes which, again, highlight how these kinds of technologies can be applied to, let's say, healthcare, or, again, sustainability and things like that.

So it's a nice way of networking with people from across the different faculties.

Exactly, yeah, so it's a nice way of meeting people and forming new collaborations across
the different departments and faculties in the university, yeah.

So can I ask: are you excited about the potential of digital and AI, or are you concerned about the implications of AI for society?

I think that, like many people, I am excited when I see how powerful those new AI-powered tools are, and how generative AI could be used for supporting learning, for supporting, not replacing, human creativity.

I think that is important to stress. But I am, of course, also concerned, for example, about
how the benefits from the technology are being currently distributed.

So, there is a lot of inequality in the world, and the new advancements in AI generate profits, but, as always, it is profit for a particular group of people based in Silicon Valley, not for everyone in the world.

And there are also the costs of technological progress. People sometimes tend to think about technological progress as if it were the same as societal or moral progress.

It is different; it relies on a completely different logic. And just because we have more powerful tools, it doesn't mean that we will be able to solve those societal problems.

So I am worried that people may start to think with this technological solutionism in mind.
So, they will think that because we have the technology, now we are able to solve deep
problems with inequality, exclusion, discrimination and so on.

But what we see thus far is that technology may actually contribute to those problems and
make them more severe. So that is something that I would be worried about.

Yeah, so I think there has been a lot of extraordinary progress. Things like AlphaFold, for example, solved what was for many years a mystery, a challenge:

how do you actually predict a protein's three-dimensional structure from raw data? I mean, those kinds of advances, I think, are going to come perhaps more often.

And I think that is mind-blowing, because the number of applications and solutions that can come from that, for the general benefit of society, is huge.

I think one of the things that worries me is how we researchers make our work reproducible, because one of the crises we are seeing, or starting to see, in AI these days is a lot of people creating models and publishing papers

that others can't really reproduce. And there is a lot of concern that we're not really doing science anymore.
doing science anymore.

And people are just for the sake of doing AI end up doing things that are not the adequate
response or the adequate way of doing... solving a particular problem.

So I think we need to do a lot of work in terms of getting new generations trained in best practices, so that when they put AI products out there,

they're conscious of all the aspects: reproducibility, but also the ethical implications.

Yes, I am also optimistic, in the sense that I do think these recent advances in technology bring a lot of benefits.

So, you know, you spoke about like drug development earlier as an example.

So, I think AI is increasingly being used in that space, and I feel like maybe in the near future, with the help of AI, better drugs will be developed, drugs which can help cure diseases that remain a challenge today.

So looking forward, what do each of you hope our third century will bring in your area?

Yeah, so I'm hoping for even more interdisciplinary work and collaborations, where we develop methods, technologies and models on the AI side of things and then continue to work closely with collaborators and colleagues from other disciplines

to see how these methods can be applied in those particular areas. But also, as I mentioned at the beginning, I look at my research as something that can potentially be applied to the Sustainable Development Goals, or SDGs.

So I am hoping, with the University's research beacons as well, that there will be even more space, more scope, to show how our research ticks those boxes when it comes to the SDGs.

One particular area that is emerging now is 'sustainable AI', or 'green AI', right? So it's not just about how the methods or models we develop in AI can be applied to sustainability, but also about making sure that the models we develop are themselves sustainable.

Because, you see, these models come with a cost as well, right? When we develop and train these models, we use a lot of compute and so on. How do we do that in a more sustainable way?

So, considering that the University did very well in sustainability rankings, it would be nice to marry the two, with our research really contributing to that sustainability.

Maybe the engineers can work on ways for data centres to do this computing with less energy.

Yeah, I think I would like to see more research going into explainable AI, trying to really understand what is underneath. Opening the black box.

That, for me, would be the future. Because if we as humans don't have confidence in what these systems are doing, I think it's going to be really difficult to convince people

that this is a good thing to have as a helper, or a replacement, in many different areas.

So greater trust among the population...

Exactly.

...about AI, how it works...

Exactly, and the truth is that we don't know. These days, even if you speak to AI engineers, real experts on LLMs, they don't really know what's happening inside these large neural networks.

So I think that, going into the future, as we understand more about how these systems actually work, that can potentially bring a lot of change in how people perceive these AI tools.

But my hope for research in that field is that, with growing awareness of the actual impact of AI and other digital tools on politics and society, we will also be able to have more democratic control over it.

Because in the end, what we are trying to do is gather knowledge and give that knowledge to society,

so that we can decide what we will do with the technology. And I would hope that democratic control over the development of technology will also become an important issue in politics.

Well, that's a very fitting and hopeful note on which to end. So can I thank all of you for
taking part in this session.

To stay up to date with everything Talk 200, be sure to follow and subscribe to the series on
the podcasting platform of your choice.

Head to manchester.ac.uk/200 to find out more about this series and all the activity taking
place across our Bicentenary year. Use the hashtag #UOM200 to engage with Talk 200 and
our wider Bicentenary celebrations on social media.

Thank you for joining us for this episode of Talk 200, a University of Manchester series.

[Music]
