Welcome to Mainly AI — the show where artificial intelligence meets real-world impact. Join your hosts Dan Mitchell and Viby Jacob as they explore the latest breakthroughs in AI and other emerging technologies, and how they’re changing business, science, and your everyday life. From startup founders to academic minds, we bring in the experts shaping the future — and break it all down in a way that's smart, sharp, and seriously fun. There's nothing to fear. AI can help humanity in so many ways! It's an exciting time to be alive.
Viby Jacob: Hello.
Welcome back to Mainly AI, where we
explore powerful ideas in mainstream
technology with clarity and curiosity.
Today we are diving into a bold
reframing of artificial intelligence.
So how this happened is we came across this paper, um, which talks about AI not as magic, not as some sort of ascendant overlord, but as a normal technology. And they word it right: it is normal. So this is based on a very sharp, uh, nuanced essay by Arvind Narayanan and, uh, Sayash Kapoor, and it is the perspective reset that, uh, you didn't know you needed.
What it talks about is, like, AI isn't magic. It's not a god, it's not a demon, it's just a general-purpose technology, just like electricity or the internet, and the real power lies in how we humans choose to use it.
So think about electricity.
Electricity didn't
transform homes overnight.
It took decades for that
transformation to happen.
So while AI might feel really fast, with a lot of things happening today, the impact, the societal impact, the diffusion, comes at a much slower pace.
Dan Mitchell: Yeah, thanks for that intro.
This is a very interesting
publication, right?
And I can totally see, because of the uniqueness of it, how people would react. Think about when all of a sudden everybody started using ChatGPT and, and magically it could talk to you and you could talk back, and it has continued to evolve. And now people are worried, oh, well, what if this thing is gonna completely take over?
Well, for those of us who have been around long enough, the internet was a very similar experience, as we went from dial-up to always-on internet.
And now we lean very heavily on it.
But there was strong concern that,
oh, this is gonna ruin everything.
Electricity had that same effect, right?
It's one of those turning points in humanity, really, where people start to get a little bit nervous.
But AI isn't a magic mind. It's like electricity. It's powerful, it's useful, but it's still under human control, right? When used correctly it can be used for good, which is a key element of the publication, right?
Viby Jacob: One of the, one of the things I've noticed is, like, all these charts and graphs that show how many years it took for, you know, adoption to happen, comparing the PC to the internet to electricity, and, you know, how many years or months it took to reach whatever figure, 1 million users or so.
Right.
And there is one point that I feel, like, you know, that's missed: for electricity, the PC, the internet, there was an investment that was required by the user, right? Like buying a PC, getting a broadband connection, getting electricity into your home, into your factory, and so forth.
And AI, in this case, like, you know, it's practically free. Yes, there are paid versions of it, but it's free. Therefore there's this tendency to say, yes, therefore the diffusion is high. Not really. I mean, it's more like the intensity of use: how many hours do we, you know, spend during a work week using AI tools? Right? So the usage is different from the adoption. Which brings me to the key points that the authors make in this paper, which I found to be very interesting. And, um, their point is that, like, you know, it's not really something, you know, very different, et cetera.
We shouldn't sort of make those attributions. We should treat it just like any other technology and look at the good and bad of it, and then make sure, like, you know, we have the societal, institutional, uh, forces, et cetera, in place to progress. And they talk about these three layers of AI progress.
Dan, like, it splits into what's called methods, where the progress that, um, that I mentioned is happening faster, like the, you know, GPT, the models, the whole, you know, large language model space.
New, singular, discontinuous sorts of process improvements that happen.
And the next set is, like, what they call applications: how you actually turn these models into products. That is on the slower side, because there are things which you can do fast, like general things, versus things that are, let's say, you know, more high-consequence tasks, where the adoption, or the sort of conversion from a model into an actual application, is really slow, given all the surrounding context that's required. A lot of knowledge is tacit, and so forth, and it's not easy to convert something into a product right away.
And then finally comes diffusion, which is actually how it spreads: the societal, you know, impact that happens, how it seeps into industries and so forth.
And one of the analogies they gave is, like, you know, electricity: once electricity came about, it just didn't sort of show up on factory floors and things like that. There were workflow redesigns that needed to happen. Factory floors needed to be redesigned, production line setups needed to be changed, and so forth.
Um, and with any sort of technology, that's sort of what happens. So therefore this adoption, the societal diffusion, sort of, you know, will be the slowest, according to Narayanan and Kapoor, and the actual integration into industries will happen over time. And we see some of that, especially when we look into regulated industries, high-consequence tasks as they call them, and so forth. All right. So that's what I would say about these three key aspects of AI.
Dan Mitchell: So if we think about this, because you're framing it very much from a business perspective, right? And there's a societal perspective too. There's gonna be regulation that will come, and while we haven't seen a ton of it yet, there'll be regulation for that, just like anything else.
We saw it with the internet, we
saw it with even electricity.
And then there's gonna be
a cultural aspect of it.
Is it widely accepted?
Will it be okay, is it okay to use AI even in academic situations, where at the outset people will look at AI and say, you are cheating on your schoolwork if you're using it for research, because maybe you should be using books or the internet? That's a very traditional way of thinking, and now it's becoming more widely accepted. You look at some of the work that UT Austin is doing around the acceptance of AI as a primary research source; for schoolwork, that's very forward thinking.
And so there's a culture
that's being built there.
And then the last part: we talk about infrastructure. If you watch the show 1923, you'd see the resistance to adoption of electricity when they started to bring it out west into Montana. And as it becomes available, or more widely available, and for a reasonable cost, I think that also helps drive adoption.
So, you say free, but if we look at even a ChatGPT subscription, the free version is not the paid version, the Plus version is not the Pro version. Just like anything else, for adoption, the barrier to entry is almost always cost to start.
But then is it useful, right?
Because just like electricity, if
you don't have a bunch of electric
appliances, it doesn't make a ton
of sense for you to invest to put
electricity into your farmhouse, right?
Viby Jacob: Right.
Dan Mitchell: So, but just because we can build it doesn't mean society will adopt it overnight, though. Maybe with the pace things are moving, that'll change.
Viby Jacob: Exactly.
Exactly.
And then that, that's one of the points that they make, right? That there is a lot of transformation in the surrounding industries that needs to happen along with it, to kind of have this diffusion in the traditional way that we talk about for other general-purpose technologies that have happened in the past. Right?
So their caution is more, let's say, like, you know, against this notion of a superintelligence, right? Superintelligence, they call it, as a term, incoherent, right? What they're saying is, like, okay, let's not sort of get into this notion of an AI doomsday, and we should think more about, like, misuse, um, misalignment, biases creeping in, you know, poor institutional frameworks that exist, and so forth.
Um, and, um, naturally, like, lopsided sort of, uh, dynamics that exist, um, within different, um, institutions, power and so forth, and how do we sort of lessen that.
It's not so much, like, you know, that they're leaning towards the utopian or the dystopian sort of view. They're saying, like, yes, AI is useful, AI is providing value, but we need to recognize that it will evolve in the way that we set up these controls around it, as such.
So that's one of the key ideas that's
there in this particular paper.
Dan Mitchell: But isn't there, there's an element of trust there. Anytime you start to use something new there's an element of trust, especially when the stakes are high. As we've seen, AI can tend to hallucinate, and while it's gotten better over time, you think about other high-stakes things that have been rolled out: self-driving cars. Right now, you're putting your life on the line.
Viby Jacob: Right,
Dan Mitchell: That said, sure, you have a pretty good shot of getting where you're trying to go, but at the same time, there's not a 0% chance that you could crash. There's always a chance that something could happen.
You think about, if you go all the way back to the early space race, people were doing math by hand, and then you've got computers that took over these calculations. They didn't trust them a hundred percent. They used to spot-check the work. And then over time, because there was some urgency there, they started to build more trust toward them, because they figured out that, hey, it's science, it's math, it's pretty straightforward, and we should start to trust these systems even with the stakes being what they were.
Viby Jacob: Yep. I, I fully agree. Regarding that self-driving example that you just gave, which is referenced in the paper, by the way, as talking about, um, you know, safety regulations and things like that that need to be there in autonomous driving, and how that actually drives the success of some of these companies, right? They give the example of Cruise, um, you know, which got shut down due to safety, um, not being able to meet the safety standards.
As well as, sort of, you know, Uber as well as Waymo, the difference in the success across these different companies. Related to that, uh, one point: in one of the interviews that I watched of Arvind, they actually talk about the fact that, like, when the automobile industry sort of came into being, they weren't really thinking about safety either.
So a lot of new technologies, when they start out, they're not really thinking about safety, and they evolve over time, uh, due to accidents as well as irreversible mistakes from irreversible decisions, as such, that, um, led to a sort of point of failure, or, let's say, a loss of reputation, PR failures, and so forth. And that brought about the notion that the car manufacturer is in charge of the safety of the car, right? Like, that came about over a period of time. It was not there, like, you know, right at the advent of that automobile technology as such.
So similar to that, their point is, like, you know, as AI evolves, um, the companies who are sort of building it, they do have a responsibility in terms of, like, you know, building the right safeguards and guardrails and things like that. But of course, they reemphasize the fact that human control will remain, um, you know, absolutely critical in terms of, like, these, um, AI products and deployments and so forth.
Dan Mitchell: Sure.
If you think about it, there are risks with everything, intended and unintended, and human control is an important element of it. And for anyone who's watching the video, we've been flashing on the screen some of the quotes from the publication, just so folks can follow along. But as we think about, all right, we need to maintain control, then we start to talk about, well, what are the risks? Accidents being one of 'em. Misuse: any technology could be misused. We've seen it with bad actors who are using voice cloning technology or spoofing. Moving too fast.
There's always an inherent risk.
We've seen that in a bunch of different ways, some of those in safer environments than others. And then sometimes things just don't line up, like the intent of what you've created goes awry, right? It turns into something else.
You go back to history, and I always drag it back to history. You think about Tesla, not Tesla the company, but Tesla the man, and these visionary ideas for electricity, right? He was going to broadcast electricity through the air. A bold idea. He didn't fully understand what that meant. It became very dangerous. You look at Marie Curie, right? Breakthroughs with radiation, and she had no idea that she was giving herself radiation poisoning all along.
Viby Jacob: Right.
Dan Mitchell: So, like radiation or early
electricity, the biggest dangers are gonna
be the ones that we don't see coming.
Viby Jacob: Right.
I think they, they make the argument that, like, yes, if you look at most of these, um, most of these risks, then you sort of understand the risks, but you're not sort of using the technology in high-consequence sorts of tasks, let's say, like, crime prediction and life-impacting sorts of decisions, right? So, um, it's not so much, like, you know, that you want to talk about AI domination, world domination, or, you know, the other scenarios, killer robots as such. Right.
So you wanna sort of make sure that some of these risks are taken into account, and then, therefore, like, you know, there is a need to look at what you should do, in terms of, like, you know, once you understand these different things.
You understand, um, how it is more of, let's say, an augmentation rather than a substitution. That was one of the other points that I really liked in the paper, where they said that, look, um, most of the tasks will kind of evolve into interpreting AI as well as sort of managing AI. It's, um, it's just the same way that when other technologies came about, there was a redesign of jobs as such, not necessarily, like, sort of, just a, you know, disappearance of things overnight as such. Right. So it was more of redefinition, not erasure, which was one of the key things that they mentioned, um, in the, in the paper.
In terms of, like, given these risks, as well as when we are looking at how things are evolving, the natural question would be: what should we do, given these risks as well as the way AI is evolving in the general realm, not so much, like, the very domain-specific realm? Um, what should we do? Should we be looking at, what is the prescription?
Dan Mitchell: Before, before that, the first thing we should do, and I know that Sam Altman says it's costing a lot of money, but I do always say please and thank you to the LLMs, because those killer robots you talked about, just on the outside chance that they come looking for us, I want 'em to spot me and be like, hey, that's the guy that said please and thank you. He's good. Let him go. Hopefully they'll cut me a break there.
But just quickly, on the kind of evolution of jobs and what that means: going back to that other example with the space race, we literally had human computers. That was their job title, computer, right? And so as they went down the path of using these computers more and more, it's just, the computers took over the calculations, but the people didn't disappear. You know...
Viby Jacob: Exactly.
Dan Mitchell: Hidden Figures, right?
They became supervisors, decision
makers, system designers.
They embraced the technology.
Viby Jacob: Right.
Dan Mitchell: There's an opportunity here, where you have, basically, the concept of the manager of AI agents is one that's been put out, but it's gonna continue to evolve at the pace that things are moving. So...
Viby Jacob: Absolutely.
Absolutely.
Dan Mitchell: But...
Viby Jacob: So...
Dan Mitchell: ...what should we do? What's your take on this? What should we do? That's what the article asks: but what should we do?
Viby Jacob: I think that, uh, you know, if we look at it, I, I think, like, I look at AI as not a miracle, not a monster, more of a mirror, right? Like, so what we put in is what we get. So, um, even, even if you look at, like, the models: the models are interpreting based on the data that's within the corpus of data they're trained on, right? So it's more of a reflection than anything else, right?
So it's about us making sure that the right, um, you know, guardrails are set, the right values, et cetera, and also understanding the limitations of AI as it progresses, right? What it can and cannot do, and not sort of, you know, falsely or simplistically extrapolating it to do everything that a human can do. That is basically where I kind of land, um, in terms of what we should do, what we should be mindful about, right?
Uh, the race to AGI and everything is progressing, and yes, there will be more developments in those spaces, et cetera, but as things move along, we should just be mindful of the limitations, and there's no one clear benchmark or anything that can actually provide that value, right? Benchmarks suffer from their own set of limitations.
Therefore it's more, sort of, making sure that we have the right policies, the usage of AI not, like, so tightly regulated that, you know, progress can't continue to be made, but, um, such that the values and the structures of the world are reflected within it.
Dan Mitchell: Yeah. Yeah. No, I think that's right. Also making sure we're ready for it. Plan, plan for the worst and hope for the best. I think putting bans on these things is a terrible idea. We need to just stay very close to 'em, study them. I wish that we had studied the effects more closely when social media was rolling out, but everyone was so busy scrolling it that they didn't bother studying it. So, here we are now.
Viby Jacob: Right.
Dan Mitchell: But to your point, it's not, the solution's not to slam the brakes. It's more to add better headlights so we can see where we're going with it. And one of my favorite tech people in the world has been attributed with saying something along the lines of "technology is about enabling human potential." That's Michael Dell. And he's right.
The idea is that technology is an enabler, not a replacement, and not meant to cause harm. AI should serve to enhance human potential, not replace it. That's my take on it. So...
Viby Jacob: Absolutely, fully agree. I don't think it's about... go ahead.
Dan Mitchell: I was gonna say, any last thoughts on it? You?
Viby Jacob: I think it's not about fear. The whole, uh, thing about the paper, its call to action, is, like, it's not about fear. It's more about stewardship, right? Active stewardship that needs to happen.
And, um, you know, this reframing, if it causes us to pause and think, uh, in terms of, like, how AI will play out, I think, uh, that is a major, uh, you know, uh, difference in a world where we are kind of always talking about, like, tech determinism, right? And, sort of, these two sectors or two domains of thinking, one wildly optimistic and the other, sort of, the doomsday.
So I, I found the paper to be extremely refreshing, and I would reiterate the fact that, like, you know, it's not a miracle. It's not a monster. It's a mirror. It mirrors, it reflects back, the values and, um, you know, structures that we put in. So.
Dan Mitchell: I like that framing a lot.
That makes a lot of sense.
So, so with that, I think we will wrap it up. We will put the links to the publication in the page for the podcast. Thanks for joining us today on Mainly AI. Be sure to like our page, follow, and all that fun sort of stuff. And if you're interested in joining us for a conversation or have any suggestions, drop us a line.
Viby Jacob: If this reframing sparked something new for you, share it. Uh, you know, rate it, message us. We would love to hear about, like, how you are thinking about AI differently now, now that you're kind of aware of this paper. Go read it. It's a very long paper, about 15,000 words.
Dan Mitchell: Yeah, it's
gonna take a minute.
It took me a bit.
All right.
Okay.
Thanks everyone.
Take care.
Viby Jacob: Take care.