The Deep View: Conversations

For Episode 2 of The Deep View: Conversations, we flew to Abu Dhabi to sit down with President Eric Xing of the Mohamed bin Zayed University of Artificial Intelligence, the world’s first AI-only university. We talk about everything from his unconventional journey to MBZUAI, misconceptions about the technology, the reality of progress in the field, and the idea that, like the printing press, AI might usher in a new age for humanity: the Age of Empowerment.
 
SPONSOR:
Google Cloud: For all your cloud needs.
Go to https://cloud.google.com/startup/apply?utm_source=cloud_sfdc&utm_medium=email&utm_campaign=FY21-Q1-global-demandgen-website-cs-startup_program_mc&utm_content=the-deep-view&utm_term=-

EPISODE LINKS:
MBZUAI appoints Eric Xing: https://mbzuai.ac.ae/study/faculty/professor-eric-xing/
Eric Xing addresses the class of 2024: https://mbzuai.ac.ae/news/presidents-address-to-the-class-of-2024/
The printing press and the age of enlightenment: https://www.history.com/news/printing-press-renaissance

OUTLINE:
00:47 Introduction to MBZUAI
02:49 The Unexpected Offer
05:19 Making the Decision
10:01 A New Cultural Moment
13:28 Starting at MBZUAI
14:35 The Age of AI Empowerment
21:44 AI’s Potential and Risks
29:19 Regulating AI
35:40 The Promise of AI
43:39 AI as the Modern Microscope
45:19 Limitations and Misconceptions of AI
47:57 The Role of Language in AI
50:26 AI Literacy and Public Perception
01:01:00 Open Source vs. Closed Source Debate
01:15:01 The Future of AI Education
01:22:31 Conclusion and Final Thoughts
 
SOCIAL LINKS:
- X: https://twitter.com/thedeepview
- Instagram: https://www.instagram.com/thedeepview.co/
- LinkedIn: https://www.linkedin.com/company/the-deep-view-ai/

Creators & Guests

Host
Ian Krietzberg
Editor in Chief @ The Deep View
Guest
Eric Xing
President @ MBZUAI. Researcher, educator, entrepreneur, and administrator in computer science, artificial intelligence, and healthcare.

What is The Deep View: Conversations?

Artificial intelligence is a complicated topic, bound by many complex threads — technical science, ethics, safety, regulation, neuroscience, psychology, philosophy, investment, and — above all — humanity. On The Deep View: Conversations, Ian Krietzberg, host and Editor in Chief at The Deep View, breaks it all down, cutting through the hype to make clear what's important and why you should care.

Welcome back to The Deep View: Conversations. Today, for our

second episode, I traveled all the way from

New Jersey to the United Arab Emirates, specifically

Abu Dhabi, which is the home of the Mohamed bin Zayed University of

Artificial Intelligence. This is the world's first and only

university that is focused exclusively on AI research. And

right now I'm speaking with Dr. Eric Xing, who is the

president of MBZUAI. Eric, thanks so much for sitting down with me.

Well, thank you for traveling all the way here. I appreciate your diligence

and I love to chat

So I want to start with you, right? And as we've kind

of talked about before the cameras started rolling, right? Your background

is in research, and you've been a professor, and you still kind

of wear that research hat. How do you go from a professor

of computer science to the president of MBZUAI?

Well, to be honest, I never thought I'd become a

president of a university at any time. I'm

still right now affiliated with Carnegie Mellon. I've been working

as a professor in Carnegie Mellon since 2004. I

advise students, I teach classes, I

write grant proposals, and write

papers. In fact, I'm still doing all this right now, except

for classroom teaching. So research is very, very strong, very,

very important for me. That's why I haven't

thought about becoming a university administrator. During my career, when

I became senior, I did receive numerous invitations

and requests to be department chairs, deans,

even in one case another president, in

different institutions in the United States or in

UK and other places in the world. And I

didn't leave Carnegie Mellon. So

when I got an invitation for interviewing

here, I thought it's maybe

a learning experience for me to see the

place and also to maybe offer my

insights or maybe experiences or opinions to

the leadership in here. During the interview, my interaction

with the host here is mainly based

on my identity as a researcher, as

a developer, and as a scientist. So I give them

basically the viewpoint you

know, from that angle, you know, how a university should be

built, should be operated, and what's

the best function and reputation

for university. And I also know that I'm

not the only one they interviewed, you know, they interviewed quite a few people. All

of them are much more senior and

recognized than me. So I wasn't taking it too seriously

to be a real job interview. It's really an exchange, an

intellectual exchange. So I went back home. But then, after

some time, to my surprise, they made an offer to

me. I wasn't ready at

all to take this offer because at

that time I was still very busy working with my students

and working on a few exciting projects. That was

at the beginning of the COVID. In fact, the

trip to UAE for the interview was the last trip I made

before travel shut down in the States and

around the world. As we entered the

COVID season, things changed a

lot, even back home in the States. You

are locked down at home. You don't really get to

see your students so often. You teach through a camera, which

was all very new to all of us. And you started

getting yourself isolated from the rest of the world. Which

is good and bad. I didn't like that experience. I feel research

is important for science to be done collaboratively

with colleagues, with students. So I missed that part.

But on the other hand, it gave me the time to think, and

to really ask myself again where

I'm heading. So with this offer, which is

definitely non-trivial, it is supposed to

be bringing me to a place that I've never had previous

interaction and experience with, and also it's a job I've not

done before. So I consulted

some of my colleagues, mentors, and

friends. Professor Sir Michael Brady, the former

acting president of the university, he

made a very, very interesting case to me. He said, look, this

is more than just a university administrator. It

is about creating something from

scratch with a clean slate. And also, it

is in the region where no

such university ever existed. And the

country and the region needs such an institution to

really be the beacon of a new culture.

In a sense, I have the opportunity to

influence and to shape a new type of culture, to

really help, you know, a

whole people, a whole country, you know, to

seek new directions, you know, or create new

results and also to redefine you

know, some of the decision process and the operation and

even culture, you know, in the region. So in a sense, there is

a chance for you to impact a country to

reshape the culture. So that's quite exciting to

me. I'm always surrounded by adventures in trying new things, and

this is definitely very new. Then I asked another friend

and mentor, Professor Michael Jordan, who has

was my advisor back at UC

Berkeley when I was doing my PhD. But then we kept the

friendship all the way. And he

also strongly suggested

to me to consider this opportunity more seriously. He

said, Hui, you are a professor

and a researcher with a good track

record of being productive and being creative on the

scientific part. But what could be

the next big stage? And that's what he asked,

right? Yeah, he understands that I want to

continue to do science, but bigger science also requires bigger

resources. And also, from a pure

career development standpoint, getting such

an offer at a relatively young age really actually

means that you have the

opportunity to experiment and also to try

different new ideas. In a sense, you have a chance to fail and

do it again, do it differently and so forth. So

he said this is a once-in-a-lifetime opportunity, because

not every country is trying to build a university like

this, and it's not like such a

thing can happen at any time. This is maybe just a once-in-a-lifetime opportunity.

So these are the two strongest positive inputs

I can remember. There are other inputs as well that

are as positive. But of course, I also received a lot of warnings

and negative feedback. It is very

risky. It could ruin my career and reputation if I screw up the job, or

I may run into a very different culture that I

just cannot understand and work with,

all these pragmatic issues and other things. So

I took a step back and I

took advantage of being at the height of COVID, with

very few people to talk to unless I

actively sought them out, and spent that summer in that thinking.

And in the end, I think I

will be living in my house probably

for the next God knows how many months and

years. I wouldn't be able to achieve too many

exciting things anyway. And this

particular opportunity is indeed once in a lifetime, right?

And from a technical standpoint, I

started to see AI as a rising

sun on the horizon. The established

university infrastructure is already having

difficulties in coping with the new needs,

the new culture, and

the new evaluation and merit

metrics in this new discipline. But it is

very hard to change an established organization. So

even from a self-professional growth

and development point of view, it

is an opportunity to take your fate into your own hands and

also maybe create an environment that can help many of

our colleagues, students, and of

course the country and the people in

this region, through a brand new setup

of such an institution. So I started

to appreciate that this was becoming a huge opportunity where I can really

make a difference, because I am an active researcher. And

it is not so common for an active

researcher and teacher to jump

into a university president role. Usually, the

typical presidents you see are already somewhat

removed from the daily scientific life and

teaching life. Therefore, it

can be difficult for them to appreciate, you know, from

a close distance, the difficulties, the

needs, and all that, you know, in programming

and in organizing the university, right? So

all this converged to, you know, a

decision that, you know, I'm going to give it a try. And

then, you know, it took me a

year to make that decision. I actually felt

really appreciative that the university was

able to wait for me for that year. In fact, also the

Board of Trustees, they had the patience and also the trust

in me, you know, to onboard myself. I

remember it took me some effort, took

everybody a lot of effort, in fact, to even make the trip here because all

the flights were unpredictable. And even getting

a COVID test certificate was also unpredictable.

But you needed that to board a plane, and it had to be a 24-hour-fresh kind of certificate. A

lot of logistical challenges. So eventually I made

it to Abu Dhabi one day before the first

class of students started. I remember I got

myself prepared in a temporary

residence, where I basically borrowed a

room from a very kind host, and

I delivered the opening speech of

that class, again, from a Zoom

meeting. Very

simple, not as fancy as today's recording devices; it was coming

from my computer camera. So that's basically how the whole thing started.

At that time, the university was really, really very young and weak.

It had barely, I think, less

than a dozen faculty and only one

class of students, which was

very good. But I didn't have

any control or

knowledge about that, because they were not admitted under

my direction. So we literally walked

into the unknown, and that basically

If you run a startup, you're going to want to pay attention to this. Startups, especially

AI startups, need good cloud providers. Lucky for you,

Google Cloud exists, and they offer an experience, the Google for Startups

Cloud Program that's tailored specifically to you. The program

offers eligible startups up to $200,000 in Google Cloud

credits for up to two years. For AI startups, that number is $350,000. Plus,

you get the mentorship and experience of Google Cloud's top-notch team. What

are you waiting for? Start building today with Google Cloud at cloud.google.com slash

startup. That's C-L-O-U-D dot G-O-O-G-L-E dot com slash startup.

Yeah, it's quite a journey. Now, about AI specifically,

right? And I know you've had an interesting journey with that. You started studying

molecular biology, and you kind of jumped into machine learning. You've

said of AI that it could usher in an age of empowerment. The

printing press ushered in the Age of Enlightenment, and

you've kind of equated the two as somewhat similar in how society-changing

AI could be. What does that Age of Empowerment look like? And

from where we're sitting now, what's the gap to get there? And

I'm glad that you asked this question because, you know, many people would ask

how to achieve artificial superintelligence and so on.

And then what is that, right? So, yeah, I think I

like to see any innovation and breakthrough from

a utility standpoint, what

function it delivers, and how such functionality

changes people's lives, changes society, and

maybe even changes civilization. So

it is in that context I call AI

to be equivalent to the

breakthrough leading to the age of enlightenment,

which is the printing press. There are many, many other inventions. The

invention of the steam engine, for example, discovery of

electricity, all these are huge. But I

think AI and

also the printing press, and the invention of letters

and writing, are at a different level compared

to all these technical innovations. Because if

you look at, if you put AI

and the printing press and maybe the

invention of paper and writing next to each other, they have something in

common. It is actually the backbone of human

civilization in their experience with

information. It's about how we

process and store and use

information in a

massively different scope of

magnitude. Once you have

the technology of writing, using letters and characters

and paper, then

the information can be recorded rather than passed through

verbal communication. I

remember, is that Plato?

I think Plato was a student of Socrates. Socrates

didn't actually believe that one should write knowledge into

any book. He believed in the in-person experience of

teaching. But it is

Plato who convinced him, or maybe who took

on the job of taking down all Socrates'

teaching and ideas into writing. And now we

appreciate how important that is, because that knowledge can

pass down to generations as it is. So in

a sense, it is an embodiment of

the information technology of antiquity. But

at that time, you needed to copy the book. And the

people who owned the book needed to be

literate and also needed to have that kind of status, to have

the opportunity to access it. So knowledge belonged to a very few

people. The printing press was

a major advancement, unlocking knowledge from a few

people to everyone, because it allowed every

person to have their own copy of the book, which began

with the Bible, and they could form their own interpretations and

understanding. And then, after that, people started to

think based on that knowledge, and

then they wrote more books, right? So we even have

the university evolving from a theological school to, you

know, studying every topic. Now, why

is AI directly related

to that train of evolution? If

you look at a large language model, for example, it is not trained

on one book. It is not trained on

one sector of knowledge. It is literally trained on

everything ever written into language through

civilization and also in any media. In

a sense, you can imagine it is about turning all the

libraries in the world into one

device and then make it available next to your

fingertip. You can basically now easily get knowledge and

learn from it using this interface, which

takes knowledge broadcasting

and storage and accessibility to

a different level. Then AI not only retrieves

knowledge for you, it actually started to be

able to use the knowledge to solve problems, as we can see. They can

at least answer many of your questions. And they can also

start producing, for example, code

and software. And nowadays, people

are trying to develop a lot

of applications in different sectors based on that, because

knowledge can be put into action to solve problems. And

in that sense, I call it an empowerment after

enlightenment. It is not about just giving you the

space to think. It is now giving you the tools to

make you a more powerful enabler

or actor in whatever occupation that you do.

And then, in a sense, it leaves you more space to

do maybe more challenging, intellectually

challenging things. And so that empowerment comes

together with even a next level of enlightenment

where, you know, everybody can very often do,

you know, a very advanced level of

work without necessarily spending 20 years of

training, you know, in school. And then people should

now start rethinking, you know, what am I going to learn, you know,

in the next five years and what kind of problems I

should tackle. So we can refocus on problems

and maybe needs rather than just

focusing on repeating, executing

The difference to me between the large language models that

we're kind of dealing with today, right, and something like the printing press is,

you know, I think of all these technological innovations as levels of

automation, right? The printing press, as you said, meant that instead

of having to handwrite books and pass them out, right, we

could have copies of stuff. It created more accessibility. The

jump to language models and

the kind of generative AI we're seeing now, to me, what you're starting

to do is you're starting to take the human out of the loop to a degree. Do

you see some sort of risk in that environment where maybe

for some people it might unlock them to explore greater problems? Is

there a concern that for other people it just acts as a crutch for human

intelligence, human creativity, because you have everything accessible,

I don't deny that there is a risk. In fact, any new technology you

know, comes together with both

opportunities and risk. Just like, you know, even

we talk about something that we are very familiar with,

you know, medicine, for example, you know, there is always a side effect that

we need to control. That's why the FDA was formed. Genetic engineering, you

know, we use them now to dramatically raise the productivity of

crops. And we can engineer, you

know, the crops, you know, or the

cattle or the livestock, to address some nutritional deficiencies

and so forth. But you can also use

that technology to create very harmful viruses

or other forms of toxicities. So

it's really about how to mitigate and

how to regulate those risks. I think AI

is not an exception. But on the other hand, it is

also not a unique

threat that we haven't dealt with before. In

the past, people figured out that there is a clear distinction between

the science, the technology, and the product, right?

And where you set the brake and

where you regulate is well

understood, that you probably should focus more

on regulating the users, on regulating the distributions, but

on the intellectual part, which is the science, and

maybe even the technology, which is the production part, you use

a different type of management. Typically,

for the science part, you don't even manage, or you'd manage very little, because

the creativity, once you shut it down, the cost

is too big to bear. In fact, many of the assumptions

about the current rush into AI

regulation discussion and even panic is

also based on an unclear understanding

about really the power of AI and how powerful

and how potentially dangerous it can be. It

is not well understood, especially by the general public.

Sometimes, even from a tactical standpoint, even

if there is a risk, how material that

risk is actually has a big impact

on the risk and benefit calculations. So

I want to say a few words about really where we are in AI in

terms of its danger and its risk. Yeah, AI

is more than automation for sure, because AI can

automatically learn skills from training

materials. So it is one step beyond

just programmed automation.

It does have the ability to acquire knowledge and to

become more and more powerful once

you plug them into some data sources and

also run an algorithm. But on the other hand, AI

also has a lot of limitations. It

is not a living creature that has self-identity,

self-drive, agency, free

will, and so forth. In fact, it is unclear to me at least where

all those will come from based on the current

architecture and our current mathematics of doing things. I

think down the road, new architectures beyond transformers, new

form of data beyond text and image will be feeding into

the next generation systems to create more

functions. But these functions actually are still quite

a few steps away from the type of higher

level intelligence or human-like intelligence I

just mentioned, like an agency, like identity, like a will, and

so forth. For example, how to even do more

complex reasoning, multiple steps reasoning. You want to

plan a shopping trip, you know, or travel, and

purchase a flight ticket, you know,

and plan hotels. That requires multiple steps of

reasoning. I don't think right now the current

version of GPTs is able to do well on that job. Now,

talking about maybe a slightly more challenging task. How to run

a store? How to run a company? How

to win a campaign, a battle?

These are reasonings which typically take a

hundred steps, conditioning on real-world

environments, and also conditioning on

embodied experiences from sensory input and from anything else. And

these are very difficult. In fact, you can probably already find

out that you cannot ask, for example, a GPT or AI

assistant to teach you how to swim, to teach you how to play

music. Because why do we even have different

music mentors, why do

we have master classes? You know, those musicians, when they teach you,

it's not about reading from a book and say, hey, you apply

two grams of force onto a key and hold for

this many milliseconds and to produce a sound. It's

a very different experience of music teaching. And

I don't think right now our AI system is

even designed to do that. And certainly they cannot do that. So

I think the current AI system is still

on the steep learning curve to gain more capability, whether

it has the risks that people

are now, you know, fearful

about, I think it's premature to say

that. Now people say disinformation, deepfake,

and all that, these are risks

not coming from AI technology. It is coming from

the users of the technology. It's just like a

gun. You definitely can cause a lot of harm by using

the gun. But is that the mistake of the gun?

People are still making better guns these days, and

for whatever specific purposes. But how

to regulate its usage is the key. So I

would like to remind policymakers and the

general public to really be

a little bit more precise and

careful about where the focus is

in such AI risk and AI regulation conversations.

The AI regulation conversation is very broad, right?

There are some instances that are trying to be very focused, such

as with deepfakes and job loss related to

deepfakes and the creative industries. There are others that

are trying to mitigate and prepare for these hypothetical risks

that we don't really know if they'll happen at all. And

And I want to add one more thing. In fact, even if you want to regulate

and counter that, you actually need AI to do it. Because like

the deepfake images and

the videos, the misinformation, it's just

too much for people or for any

manual device or approach to discover

and to contain all of them. Unless you shut down the

computer, every computer in the world, they will be generated. And

As you just said, the problem with this technology and

regulating this technology is it requires

a global effort. If one country regulates really

hard, that's not going to stop China or other countries from

not regulating at all. And I think there's some concern about that.

the way you kind of framed the regulation question was to focus

Exactly. Yeah. That's why I think, for example, you

talked about adversarial scenarios, that certain countries,

like China, will be doing what they need to do anyway,

and you cannot stop them. I think the best thing

that one can do is to advance

your own technology, in fact, to make AI even better.

Even if we are under a material risk, we have

the means to take care of that, to counter that.

And do you think it would be an achievable thing? There

has been talk about the idea of international government, international

governance of AI that kind of supersedes any

individual efforts. You know, in the U.S. it's even more fragmented because

you have states doing things and then the federal government is trying to figure stuff

out. The EU is moving, and that'll impact where

companies deploy. But the idea of getting

every developer around the world from universities to

corporations to agree to a set of guidelines that maybe

Is it feasible? I'm actually optimistic. It's absolutely

achievable. But on the other hand, I have to also

point out that if you look at the age of Enlightenment after

the printing press, it took roughly 100 years before

all the warring states eventually aligned

on some common principles and also learned

how to live with this technology. In

Western Europe, different religious sects

fought to the death just because of a

different interpretation of a Bible principle, you

know, after all the people learned about it and had their own interpretations. But

eventually, you know, they learn how to settle things. And

also they learn how to live with a new society that

all can read and have many books and so forth. And

I can also name some counter examples where printing press

was actually banned. And after some hundred years,

those regions became much, much less developed than

the West European countries. Why do I mention this?

because I want to emphasize the importance of experimenting

and also the time. We are just barely five

years or maybe even three years into the age of

large-language models, right? So the AI era

has just started, and it is also evolving very fast.

We actually don't even see, you know, where it will plateau and

how a steady state would look like. And that's why I

think at this point, rushing

into a premature set

of very stiff laws or regulations isn't

going to be very helpful because it may cause

missed opportunities and also maybe

an awkward misfit of all these

policies with

respect to the real world and how it evolves. So I would say that

first, from a scientific standpoint,

we need to collect enough data points and

make careful observations and analysis about

the opportunity and the risk, and

also the trajectory of the technological development. Thinking

ahead of the curve and be able to predict where

that technology is evolving, where I should put a

gate or a checkpoint down the road, rather

than passively, you know, chasing, you

know, a running kind of object

and interfering with some of the, you know,

very organic and natural and creative and productive kind

of activities of scientific

advancements. The trick is just trying to figure out where it's going to go so

you can get ahead of it. Yeah, and it will take some time. But of

course, I'm optimistic that it wouldn't take 100 years. It wouldn't be as terrible

and as costly as the religious wars

Now, you talked a minute ago about we don't want to miss the

opportunity, right? And we've talked

about how impactful this technology can be. I've

been obsessed with this idea of trying to narrow down on

what the true promise of AI is, because like you said, I think a lot

of people misunderstand what the technology is itself. Part

of that is the fault of the kind of hype of corporations that

we've been seeing and social media is not helping. Part

of that is the fact that for a lot of subject

areas within AI, we don't have unified definitions, or people say

one thing when they're talking about something else. But

trying to narrow in on the idea of what is the promise of this technology and

how it can help people, because I think in

a lot of cases, people are afraid. People

are afraid about losing their jobs. People are afraid about existential

risk scenarios, which get kind of thrust

upon them from some of the companies developing this

technology. And it's set up as something that's

dual use, but it seems to have so many negative risks that

I think for a lot of people, there is a question of why

develop this at all, if the risks can range across

this realm of severity. So to you,

Yeah, that has a close connection to really the

level of functionality that the AI will

be achieving step by step. And at

every functionality, you can accordingly kind

of define the cost and also

the opportunity. For example, Henry

Ford used to say that every American needs to own a car. And

I think it's very bad news to the train

drivers and maybe to the carriage drivers. But

look, if every American needs to own a car, the number of

workers to produce those cars will be humongous. In fact, it

ends up creating jobs. Now imagine that in our household,

we want to have a robotic maid to just take

care of all the household

businesses. Wouldn't that be an amazing product

that nobody would object to? I don't think right now people object to

washing machines and so on. So that's a

huge market. And the key technology to

get there is AI. You need to basically have a

robot that is able to plan, able to act,

be able to understand, communicate with you, and also

get things done. So we are actually, honestly, many

years away even from achieving that. But in achieving

that, I would imagine that everybody wants to be in

that business and join the production line and

make a lot of money out of it. Yeah, of course,

you know, the cleaning ladies may be losing their jobs, but they will get a

better job elsewhere, because they will be joining the

production line in different steps of producing that amazing machine.

And also, you know, we talk about AI being

replacing some even white collar fancy jobs, like

an accountant, like a legal

worker, because we see ChatGPT

passing the bar examinations, passing the medical examinations. Wouldn't

it be nice if it freed up these very smart people to

think about new practices in

those domains? So I think Sam

Altman, from his business standpoint, defined a

few tiers of AI. I

don't want to comment on whether those steps are too ambitious or

maybe less realistic. But

I think the way of thinking is very clear. It is aiming

for higher and higher functionality. And what

I can say from a technological standpoint is that it

also will be dependent on

multiple generations of data, architecture,

and algorithmic evolutions. We're not there.

In fact, we don't have the tool right now to achieve even his second and third goals.

And within these second, third, and fifth goals, in fact, I can define

multiple concrete reasoning and

also acting tasks that requires

a lot of innovations. So at this university, in

fact, we were involved in some of this work, such as developing the

post-LLM models, including the world models,

that can even do a trivial thing of

simulating the world. If you cannot simulate the world, how

can you even teach a guy to pour a glass of water? That's actually not

written in the book. There are many other things that require

you to be able to simulate real-world experiences, and also generating

the next world based on your action on the real world, and

so on and so forth. And only in that way, you

can start to do thought experiments. You can do a new form of reasoning by

simulation, which could overcome some of

the limitations of the current language models. Then once you

have the world model, you can go one step further into

an agent model, where it has the world model as the brain,

but it also needs to have other mental devices, such

as perception, and

goals, and beliefs, and all these sorts

of things. Each of them requires other

model architectures. So I can go on and speak maybe

for an hour about the technical steps that are required, even

to achieve what we probably can see as a very normal

and tedious level of intelligence. Then

I want to also say a few words about the other opportunity that is beyond

our daily life, which is AI for science. Doing

scientific research is always a function of a technology environment.

Science requires more than the brain; it requires tools. The microscope and the

telescope actually played the bigger role in advancing biology and

physics. AI is actually one such tool. Right now

we have, for example, in the space of medicine and

biology, there's a huge amount of genomic data, proteomic

data, and all sorts of other omic data that

is not meant even for people to analyze because they are too big and

too complex. Before the modern AI

of large language models and foundation models, we

could do very little in understanding and utilizing the

data. And now there is a new

movement, in fact, in computational biology of building foundation

models to distill, in an unprecedented

way, you know, new forms of knowledge, and also predictive and

simulation capabilities of biological

systems. Imagine that. What if you can now

use an AI model to make proposals in

a very, very short amount of time of hundreds, if not thousands, of

drug designs that can be applied to cure a

particular disease. And then you don't have to even apply the

drug to a real biological system

like a cell or a model organism. You can actually

also do the same in a world model of

the biology, which simulates all biological

phenomena and the responses to perturbation. You

can pretty much do this whole loop of innovation and

design and experimentation on a computer.

And that actually is now becoming a very, very exciting

goal for many of my colleagues, including students and

the professors in this university. Likewise, you can

draw the same experiences or expand the same experience in physics, in

agriculture, in many other sciences. So I think AI will

become, again, the modern-day new microscope, which

helps you understand the complex data,

just like the classical microscope did, which was also about helping

you to understand information that you cannot see, that is too small or

So you were talking about limitations of the current architecture. And

something else you mentioned that I want to tie into this as well is

that you refer to AI as our

new age microscope in the ways it'll

advance science, and not

as a species, right? And I think a lot of the source

of where the fear is coming from and where the hype is coming from, and I

think those two are very interrelated, is this language around

AI. And some researchers have taken issue with the

term AI at all because of this: as

the creation of a new species, as a species to supplant the

human species, as the kind of next thing.

And we are creating this being thing, replicable,

whatever that is. And it's vague if you try to pin it down, but the

way you're talking about it, with the limitations and the

focus on it as a specific tool, it's

very different. And so, you know, one, I

want to start with what those limitations are. You know,

we talk about hallucination, we talk about biases and

these kinds of issues inherent to the architecture. What's also

the importance of the language and how

Whether AI is a species and so on, these kinds of

discussions rarely happen among scientists

and engineers. I call them the consequence

of people living in different dimensions. Imagine

that there is a tribe somewhere in the Amazon jungle that

has never seen an aircraft or

maybe a car, and then all of a sudden they see

a person landing from

an aircraft. It is not

going to be surprising that they take that as a new species. In

fact, if we look at the ancient mosaics

and the pictures and sculptures in Egypt

and in other ancient civilizations, you actually see

a lot of such depictions of a god-like

figure or magical species, which,

I don't know, may actually represent a machinery. That's very

possible. So there is a key

distinction between a technical and

scientific discussion and a maybe

romantic and humanistic discussion. For

example, this very word of intelligence. Intelligence is

not a very scientific designation. I regret actually our

discipline is called artificial intelligence. because the other

fields are called physics and biology. It's a noun. Intelligence

is a noun, but it is derived from an adjective. It's,

you are intelligent, and so on. It's a very subjective thing. For

example, imagine that I have in my hand a

calculator, but I didn't tell you. In fact,

especially for people who didn't know of the existence of calculators, they would be amazed at how

fast and how well I can do not just the trivial

calculations, but also square roots and the integrations and everything. I

can be called very intelligent, but it is a calculator. Once

you are in the same dimension, you see how it is made and

how it is supposed to function, you would be less impressed

and nervous. For example, large language

model. We built it. In fact, we are building it here as

well at MBZUAI. We are among the few universities, if not the only one in the world,

who actually pre-train large language models of our own. It

is a model. It is a language model doing

one thing, which is next word prediction. So

in that function, many of the utilities somehow

can be encapsulated and dispatched. For

example, answering all your questions in

a certain scope. Next word

prediction is like a linguistic simulator,

simulating how a person would

react to a prompt or a question through language.

It is literally a definable and a

measurable function. But human beings may

respond in a certain way. In fact, even between different human

beings, we do it differently. Some people recite some text; they

need to have a script to read from. Some other

people improvise. Some people are half and half. So

the approach of achieving that to a

receiver is actually less relevant. They draw on

their experiences. Now you sit at the other end of the language

model, you see all your difficult questions, your difficult math

being answered. It doesn't prove that the machine actually understands

it, you know, or really, really digests it, or does it

the way a human would. They can do it in a very different

way, and that way, for now, happens to be next-word prediction.

So that's why among scientists, among engineers, we

tend to be less amazed by this technology. And

we knew it is going to, for example, I'm not surprised at all that one

day, maybe very soon, a large language model could

get a gold medal in the International Math Olympiad. Because

that type of reasoning, that type of problem-solving is

doable through a next-word prediction model. As long as

you have a big enough brain to remember zillions

of words, you know, a prefix,

the sentence, and then what comes next to that. But

I also know that there are some questions that they cannot answer, even through a

very, very simple interface. I've tried that. I've preserved that

question list. If you are interested, I can send it to you. But I tested even

the latest Strawberry. They still cannot answer. But

to a general public, they need education, I think. Right now, the

AI literacy is very, very different from

how much we know about algebra. Some people are very poor at algebra, but

they know what that is. They wouldn't be surprised to

see a calculator. But on the other hand, AI

literacy is much worse right now. When people see something amazing,

they literally get scared. I think there is a need to enhance

the education of AI. It's not about just math and programming. There

are other dimensions regarding AI utility, AI

measurement, and business and products that

people should know. Then they can have a much more,

hopefully, authentic and realistic view about AI, even

without knowing the mathematics and programming. You don't need to be a mathematician or

That's the fascinating thing to me about AI is that it

is possibly one of the most multidisciplinary technologies that

I can think of. You have the technical aspects,

the computer science and the math, but there's also a lot of philosophy, human

psychology, neuroscience, linguistics, the

list kind of goes on. In terms of what you're talking about and the

core of what language models are and what they do,

which is a point that, you know, I appreciate you for bringing up because it's not

talked about often enough. And this kind

of race to intelligence and what intelligence is and what human intelligence is.

And the fascinating thing to me is the kind of stark differences between

language and intelligence and how these two

are not necessarily related. And

in interacting with these chatbots, right, I feel

like we, as a species, communicate our intelligence, our

thought, through language. This is our medium. This

is what sets us apart. And when we see something talking

to us in this medium of how we

convey our intelligence, it's easy for us to assume that it

That extrapolation is very natural for humans to make.

But even so, I guess it is not difficult

for us to realize that even if we don't speak,

we still function and we can still communicate. You have gestures,

you have many senses that

can sense visual, audio,

temperature, all sorts of things. And not

all of this actually is described in language. In fact, language

cannot be used to describe everything. There is much reasoning that

exists outside of language. So

I would really draw a difference

between the functionality of communicating and

functioning linguistically versus

being intelligent. Because intelligence is, again, a very,

very subjective experience. It's a user experience. It's not a measurable

I think the fascinating thing about what we're seeing with

generative AI is that- There is one more thing, which

is what oftentimes we call the soul, you

know, the spirit of a creature. I

have to say that I am very limited in my knowledge in understanding that

part. I don't know what that exactly is. Can we assume

that just by becoming knowledgeable enough, I

automatically become soulful? I'm not sure.

I think there's a difference between a machine and a biological creature.

And that difference is actually at least not well understood at this point.

I think the older generation philosophers had time to

think about it. They actually debated

on what is the status of human

reasoning and human morality and

will in the definition

of humans versus machinery. I

think that discussion and that debate and thinking

should continue or restart because now we

have a new technology, which is AI. At that time,

they had this discussion because of the

latest invention, which was the calculus, the

orbiting planets and so forth, which was fascinating. It may give you the

feeling that, wow, I'm like a god. I can predict the position

of the planets. I can predict the ellipse and so forth. And

then people start to think, oh, what is knowledge? What

is reasoning? What is consciousness? And so forth. I

think now it's time to rethink about that. Because with AI, which we

actually can make in a mechanical way using a

very, very simple training kind

of strategy, say, next word prediction, how

can we redefine what is reason and what

is mind and so forth? But I would

caution people from

making the jump of directly connecting

a very, very capable and seemingly

smart artificial system to

a human-like creature which has the

ego and also maybe

even the malicious intent to come after us. I think

there are a lot of steps, even philosophical ones and cognitive

ones, between them. And drawing an automatic extrapolation

is not very convincing, at least to a

The point you brought up about the soul, I find that

fascinating. And that's the kind of these differences

between knowledge and having been trained on every human work

ever written, and the different facets that

make up what human intelligence is. And it goes beyond

knowing things. The stuff that makes humans human

I had one interesting debate with a colleague. For example, we

train generative AI, such as Sora, for producing movies. Now,

let's imagine that the system watches a lot of animal kingdom

movies, with animals fighting each other, eating

each other, and so forth, so that it can produce similar movies. Along

the way, are they going to be learning the will to

kill and a predator-and-prey

type of mindset? Well, nobody actually

knows, at least at this point. But it's a fascinating question to

Intention is a big thing here that's missing.

But there's also a lot that's not known about language models.

The real risk I do see in a potentially

out of control AI system is not about

that they kill people and so forth. It's about their

environmental footprint. Imagine that if

you do have an AI system that is told to

infinitely iterate on some very,

very difficult objectives that actually can be measured, but

they can go very, very big. What if the machine decides

to keep training themselves and burn a

heck of a lot of energy and so forth, causing real environmental

disasters? To me, that's a real risk we have to be careful

about. There have to be checkpoints and brakes, so

that at least a person is allowed to turn it off. If

you want to build a system that never has a switch, that's probably

no good. But these are things you

can already bear in mind when you do the design. And

we could call it also a regulation in the science space. But

I don't want to overgeneralize that. It's not like

everywhere. It's only a very, very

definable and measurable risk that is indeed very,

very grave. But on the other hand, I think

Right. And a lot of these risks come from putting

these systems in a place where they can make decisions without human oversight. And

I guess the immediate and obvious remedy is don't do

that. Maintain human oversight. You won't have those problems.

Yeah, yeah. That's true. So I

think it's an ongoing discussion. But again, there is still

a difference between the scientific kind

of brain work, or maybe even

the technological experimental work, and

the production and deployments and

the usage. I would imagine

that it's more effective and productive to focus on

the latter. Also, one more thing. I

think AI, compared to other sciences,

has the advantage that it is still one step

away from the physical world. Imagine that you are a biologist, you

do virology and genetic

engineering. Well, the product of

your experimentation is a physical being. If

you somehow smash the glass, or you somehow cause

a lab leak, it actually does enter the

environment. And they will stay there, or they could spread,

just like what we saw in COVID. So these are very material

risks. And therefore, they require rather

stringent regulations and control. AI may

conceptually have that same kind of risk, but

it is fortunately still in a digital

and a virtual environment. In the worst case, you just

throw off your iPhones and shut down your machines. You go back

to the caves. We will still

survive, right? And live without, you know, for

example, computers. Of course, I know it's very difficult these days, but I

mean, it's a different level of risk. That

existential risk is different from a biological disaster,

a chemical spill, a nuclear kind of explosion,

Part of the regulation question, and all

these topics are so passionately debated in the public space,

is the idea of open source versus a lot of what

we see from the companies, which is super, super closed source.

We're not even going to tell you what we trained on. That has to come

out through reports from investigative journalists. And

it trickles out and is fought over in court because of copyright concerns.

But the debate hinges on this idea

that science needs to be open. Science

needs to be explainable and replicable. And if

it's not open, it's not really science. And it's the distinction between product

and science. But there's also concerns about

what open source could mean for security risks. And you

were talking about chemical and biological situations

and how AI open source can maybe help bad

actors in those realms. So as

it relates to that and making sure that nothing bad happens, what

Well, I don't

want to reiterate the philosophy in science

that there is a fundamental value of sharing

and openness and so forth. Definitely, I'm for that. More

than that, I actually don't think secrecy is

a recipe for security. I

think there's a fundamental issue. People thought by keeping things secret, you

become safer. I doubt it. I

don't think history gave us too many examples of that. Because

eventually, you will lose the secrecy. I think having

more people to understand the technology, to be able

to investigate and experiment creates

a much larger pool of people

and also a large pool of knowledge to contain

the risk if that risk happens. That's kind of, again, maybe

a more philosophical and abstract argument. But

I can give you a few examples. Right

now, we don't actually know exactly how

GPTs are trained. Do

you think that makes the technology safer or less safe? Well,

I don't know. I feel less comfortable

if I'm using something that I have

no knowledge of. Yes, I'm not going to take apart my car and

fix it myself or understand how

it is made. But at least I know that there

are more than one manufacturer who knows how to do it. If

my car breaks down, say I drive a Mercedes, I don't have to go back

to Mercedes. I can go to any car shop to have it fixed. That

makes me feel safe. So under the same principle, I think important

technologies, I'm not even arguing a commercial reason. I think many of

the arguments was because of a commercial interest and other

geopolitical interest. But from a security standpoint, open

source has the advantage of

sharing the knowledge to more

people, and also keeping the

technology transparent for scrutiny, and therefore, for

a better opportunity to regulate it. So I think that

part is very important, even from a pure security

standpoint. Now, of course, there are even more

arguments about the economical and the scientific kind of

value from it. Economy is obvious. You don't want

a lucrative and important technology to be

monopolized by one or two companies. That's why in

the US, you have the antitrust laws and the anti-monopoly laws,

right? So with a company like Rockefeller's, once

they owned all the petroleum production, the

government ordered it to be broken up. There is always an interest

of healthy competition among entities for

key technology and key products to actually better the

quality of technology itself and also to make it affordable and

accessible to people. So that's a commercial argument I want to make. Open

source gives startups, even

universities, a share of their impact

and also their ability to

innovate in this market. That's scientifically even

more important. I think no science actually gets

developed very well if it's kept in secret. So

for the sake of advancing the science of artificial

intelligence, especially in the United States, definitely the

U.S. is right now the current leader. I

think having the open

source community very robustly

grounded in the United States actually ended up helping the

United States to be the leader for

the past many decades in IT technology. Look at, for

example, the development of

operating systems like Linux, the search engines,

and many of the novel IT innovations;

it is because there is this openness of

technology to be played with and experimented on and

advanced, not just by a designated player, but

by the entire academic and

industrial community at large. I think that capability and

freedom is nowhere to be seen, in fact, elsewhere

in the world, not even in Europe. In

fact, that's part of the reason I became excited to take

on this job in here, because I do like to

see such a mindset and such a value,

a principle in education, in technology, to

impact regions which are yet to be impacted. So

here is a young country that

wants to have a voice and have a stake in the global technological

advancements. And they have the

resource and also the ambition and also the enthusiasm of

being a leader. And absolutely, I think by,

maybe I'm lacking the right word, advocating and advancing,

amplifying that kind of openness, that kind of transparency, you

know, from the very beginning of the foundation of this university and

of the talent we are training, I believe it is

at the end of the day helping, you know, to grow this technology. And

that's also why in our university, yeah, we indeed take a very, very

open, but of course, compliant and

regulated approach to develop our AI.

You may have heard about, of course, everybody heard about Llama from Meta,

right? And it's a huge benefit

and support to

the community. And it benefits a lot of researchers working

on that, and has maybe inspired

a lot of new advancements. Here, we

have a project called LLM360, which actually went

even multiple steps further. We not only open-sourced the

model weights, we open-sourced the

checkpoints and also the code that

you use to produce the model. And our goal is

to let researchers be able to reproduce the results,

good and bad, because otherwise you can only use

it and augment it. But you

still have this dependency of waiting for the next version, you

know, rather than being able to rebuild everything from scratch. And

in our case, we want to make the community even bigger

with this higher degree of transparency. And I

think, does that give us any risk? Well,

you can never rule out the possibility of a bad player somewhere

in the world taking that and turning it into somewhat of

a malicious tool. I

don't think, you know, that's going

to be stopped if we just close-source it. People will

figure it out from other places. But on the other hand, by

opening the technology in such a way,

we end up having more people understanding how

to augment and how to manage

and contain the risk from this machine and how to make

the machine better. So again, it's a debatable topic, but

I just want to share with you my view from a developer and

a researcher who makes these kind of technologies, I think

And we talked about this a little bit, but the thing that I think is important

in the realm of AI literacy and helping ordinary

people who don't spend their time reading

research papers or writing research papers about AI. What

are the biggest misconceptions that you see people

make about artificial intelligence in general?

Well, all the things that we just discussed, like taking

AI as some sort of a creature that

could spin out of control from human hands, is definitely

one misperception. And

also exaggerating the power

of AI in real human

life, and also in scientific exploration, in

my opinion, is also a misperception, because AI

is very powerful, but

it is powerful in certain dimensions. For example, it is very powerful now

in playing the game of chess and Go. But it

doesn't mean that it plays other things well, like planning

your trips. So it takes time to get there. Another

misperception is that AI is

only about large language models and the GPTs. In

fact, AI is a very, very broad technology that

is already coexisting with

us, more than we actually

expected. Our search engine is also an AI tool. And

I don't think people want to ban such engines. And

also, now everybody is driving

a car with a GPS or

with your iPhone next to you to

show you the road. I can tell you that I'm old enough that I lived

in an age where I used a paper map to drive. It's painful. I don't want

to go back to that age. People don't realize that that function is also

from AI. So AI is already everywhere. So

using the GPT as the

main or the only representation of AI is

itself a misperception. And therefore, the discussion around

AI sometimes is

very, very confused and ambiguous. Maybe we

should say that, how about we discuss large language models? That's actually

a lot more precise and

better scoped. But when you say AI, literally

you are talking about, say, taking Google

Maps from your iPhone. That's a part of AI. And

who knows what will come along that line, the next product. I think

self-driving is one of the major things that

is coming very close to being

already realized. It's just that right now the policy is

lagging behind its deployment. In

fact, I think policy is what's more needed over

there in terms of deployment. Because if you look at the statistics, the

accident rate of an autonomous driving car actually is

lower than that of people. And the

reason why it is not widely adopted is a policy and

infrastructure issue, not necessarily a technological issue.

And so I think there are a lot of examples in

such a space where especially decision makers

and policy makers would benefit from

having scientists and developers next to

them to be part of the discussion. I have to say

that I've been to events at various organizations

and government bodies where scientists

like myself amount to maybe

5% of the participants, or even less. So

in that environment, it is amplifying, in

We talk about people kind of trying to educate themselves better. And

like you were saying, it's really difficult. A

lot of this is ambiguous. A lot of the language changes depending on who you

talk to. What should people

be doing about AI right now? How

should they be kind of going about thinking

about it? I think there's a

degree to which some people are afraid that it's too complex, too

out of reach to have an opinion on it or to think

about it too much. What would you tell them they

Good question. I

have to say that I don't want

to push the blame onto the public just because they don't do

enough. I think the general public, even

including decision makers and policy makers, I

genuinely believe that they want to know more, but they don't know where

to learn and how to learn. So our educational system

should be the starting point; our current curricula regarding

AI or computer science, it's probably time to revisit

them. In fact, for the past, I

would say, several decades, if you look at the curriculum,

there hasn't been a fundamental change. It's literally just about how

much we teach along the same lines. Which programming

language? Okay, I can start from Fortran in the 50s,

and then C, and then Java, and now Python, and so forth. But

it's still about teaching a different language. Mathematics, well,

there is new material coming in, let's retire the older ones

and bring in new ones. But yeah, you need to know linear algebra,

you need to know calculus, and so forth. So it's still the

very, very conventional way of thinking, and statistics, for example,

is still using a year to teach regression and so forth. With

now AI and with functions like

GPT or search engines and so forth, in

fact, we are already in a crisis where some people are using

GPTs to write scientific reviews of the submitted

publications. And people are getting very frustrated,

but you cannot stop it. It's happening, right? So then your attitude towards

even published papers may need to be augmented. Are

they really the product of the researchers, or

are they a mix, and so forth. So I think this awareness of

the changing technology should first be taught to the public,

you know, in some curriculum. And that material does not require you

to know calculus and know linear algebra and know all this fancy

mathematical stuff. And in fact, it could be taught even

in elementary school and high school. So I think bringing

AI into the background of

all teaching materials, making it not a specialty

but a very, very horizontal, fundamental

science, is probably where we want to begin from. Start early

and start very, very simple but broad to

create awareness about what we

are already using AI to do, and how

well it can do those things, and what else we have set

forth for it to do. That kind of high-level knowledge

can already be taught to the general public

without any emphasis on technical specialty.

Then you can talk about the connections of

AI with all disciplines, not just AI

for science or AI making a product. For

example, artists are already using generative

AI to create visual arts,

music, and many other things. How

do we appreciate and interpret these

kinds of outcomes? And how can we redefine copyrights

and maybe authorship

and all those kinds of things? I think these kinds

of issues are already happening, but

very few people really put in the time

to think about them, or maybe don't even know about them. So

I think AI literacy is really about awareness

of such an entity, such a

capability. We all know, for example, the capability of linear

algebra, or not linear algebra, just algebra. That's why

we are not afraid of a calculator, because we actually know algebra; even

if we don't know how to do algebra, we know what algebra is. Right

now, we don't have such a literacy in AI. And

that's where, from the educational standpoint, I

would like to see universities and, even

going further and earlier, high schools and elementary schools, start

thinking about that. My son is in high school right now and

I sometimes look into

his curriculum and try to understand how

much his teachers and principals actually know

about all this. I think it's fun to keep an eye on it. Then

given that's happening, I would have more to say to

the general public: Hey, go read

such material. Go watch a few informative

videos on YouTube and other places. But on

the other hand, go ignore certain kinds of information that

you get from all different sources. There is definitely a

difference between good and bad information. And there

is also a judgment that you have to grow to be

able to tell what is really

valuable and authentic versus

what may be something suspicious. I think, again,

I'm not a fan of just

cleaning things up to make it like a

hospital or a clean room. In fact, if

you look at evolution, both in

terms of a biological entity or even evolution

of our brain, you evolve and

you become healthier because you have adversaries. You

have all these viruses, you have all these pathogens, you

actually have all this misinformation from

your world. And you co-evolve with it. You

end up developing your own immunity to be stronger. And

keeping you away from all this isn't going to make you healthier. In

fact, it makes you weaker. I don't think the politicians

and the general public appreciate that. They want to be as

safe as possible, away from all risk and all

kinds of pathogens, psychological pathogens, you

know, or biological pathogens. I think it's going to weaken

our civilization. And then when you have an adversary, we

talk about China, we talk about some other countries, what if they don't do that? You

will eventually run into an adversary that is healthier than

you, and that will be even worse. So I would really

advocate a culture of openness and

also of bravery to embrace this

reality and then try to take it on. Thanks so much,