The NeuralPod AI Podcast
The NeuralPod is all about deep diving into technical machine learning topics and showing their real-world impact, bridging the gap between AI-first companies and adoption. We chat with subject matter experts across reinforcement learning, recommender systems, deep learning, and generative AI, as well as with business leaders, VCs, and operations and strategy experts.
Who are NeuralRec.ai? NeuralRec is a recruitment and staffing agency. We build niche ML teams and represent some of the globe's best ML talent.
Keith Dear: Like I was looking over the edge of a precipice, like, oh my God, this thing that I've been talking about now for nearly 10 years is happening. It's happening now. And I knew that intellectually, but to feel it viscerally is very different. There's a tidal wave of change coming, and I wanna be in the best possible position to surf it and not be crushed by it.
Chris: Welcome back to the NeuralPod. I'm joined today by Keith Dear. Welcome to the show, Keith.
Keith Dear: Thanks so much.
Chris: So it's a real pleasure, um, for you to join us today. I know we have several mutual connections, um, over the years, and we were recently introduced, but, um, yeah, everyone speaks so highly of you, so it's a real pleasure to chat today.
Um, would you just like to start and give everyone a quick, high-level introduction to, um, you know, your career?
Keith Dear: Um, so very briefly: um, I was an intelligence officer in the Air Force for 18 years. Spent some time in Number 10 as an advisor. I, uh, spent some time at Fujitsu. Now I'm CEO and founder of, uh, Cassie, um, an AI super-strategy startup.
Chris: So would you like to give people
just a, uh, an introduction to Cassie
and kind of what you've been working on?
Keith Dear: Uh, yeah, uh, very
happy to, and thank you for,
um, inviting me on the podcast.
It's good to be here.
Um, Cassie: we describe Cassie as a super-strategy engine. Building on superforecasting, we do probabilistically optimized, superforecasted decisions. What that means is: you tell us what it is you wanna achieve, and we tell you your probability, your odds, of achieving it. We tell you the factors, the most important things you need to achieve in order to make that thing happen, and we tell you your probability of achieving those factors. So, how likely is X to happen, and how much would it change Y? Um, we do all of that continuously, so we are forecasting against changing context. If there's new information, then we can usually prospectively, and certainly quickly, update the forecast to show you how, uh, it changes your odds of success. Um, what that means is we're much more than just a prediction engine. We're a strategy engine, a tactics engine, a decision engine. Um, and we genuinely think we'll completely transform how individuals and organizations make decisions.
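To make the shape of that concrete, here is a minimal, hypothetical sketch of a factor-based forecast of the kind described, not Cassie's actual method: each factor gets a probability of being achieved and a weight for how much it moves the outcome's odds, and updating one factor re-forecasts the outcome.

```python
# Hypothetical sketch only: illustrates factor-weighted forecasting,
# not Cassie's actual model.
import math

def outcome_probability(base_log_odds, factors):
    """Combine factor probabilities into an outcome probability via log-odds."""
    z = base_log_odds + sum(w * (p - 0.5) for p, w in factors)
    return 1 / (1 + math.exp(-z))  # logistic squash back to [0, 1]

# (probability of achieving the factor, weight on the outcome's log-odds)
factors = [(0.7, 2.0), (0.4, 1.5), (0.9, 0.5)]
print(f"P(success) = {outcome_probability(0.0, factors):.2f}")  # ~0.61

# New information arrives: factor 2 now looks less likely. Re-forecast.
factors[1] = (0.2, 1.5)
print(f"Updated P(success) = {outcome_probability(0.0, factors):.2f}")  # ~0.54
```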
Chris: That's super interesting. And what do you think specifically will help, um, organizations make better decisions with what you're developing?
Keith Dear: Well, it helps in lots and lots of ways. Like, one of the first ones is it forces organizations to commit to a resolvable outcome. Like, what is it you want to achieve? How would you know you'd achieved it? Um, and if you look back at an awful lot of the business literature, and to a lesser extent this is true of military, diplomatic, and, um, geopolitical objectives as well, um, there's a whole series of papers from Harvard Business Review talking about how most of your strategies aren't strategies, and they're not strategies 'cause people never really say what it is they wanna achieve with any clarity. So we force that rigor, because if you can't do that, then you can't work out what the most important factors are. Then we force what we call legibility of beliefs. Like, being really explicit: well, if you wanna achieve this thing, what are the things that you think will most influence your achievement of it? Too often that gets kind of fudged and fuzzy as well. Uh, so we force you to be explicit, and therefore we give you legibility of beliefs. Now you can see: okay, I, Chris, believe these things will most influence this outcome.
Um, and so you've already got two things you probably don't have as explicitly and clearly as you normally would. Then you have probabilities on those things. So you are forced to confront, like, well, okay, how likely am I to actually achieve this? You know, we called it Cassie 'cause we tell you what you need to know, not necessarily what you want to hear.
Um, we use both AI and crowdsourcing.
So the crowdsourcing might be like from
your C-suite or your colleagues and peers.
Now you can see, like, well, there are differences of opinion, and explicitly why there are differences of opinion, on the likelihood of achieving this thing that I care about. Which just enables much more principled, rigorous, and productive discussions around how you're gonna go about achieving something than I think is possible today. Then we use AI to do exactly the same thing, to generate the factors and the probabilities, and, uh, we can do all of that 50% better than humans. So we're 75% accurate using AI, versus humans at 50%, that is, 50% better than humans.
And that in itself is profoundly important, but I always start with the collective intelligence explanation, because actually what we do is incredibly valuable to organizations through harnessing the kind of science of decision making, even if there were no AI.
Chris: Sounds super interesting, and we'll go into a little bit more detail on that today. But just to let people listening know: um, we're gonna go into a bit more of your background, and you recently just got back from China as well, which is, um, you know, interesting, so we're gonna talk about that a little bit. Um, you know, I think AGI and ASI, uh, kind of definitions are up for debate at the moment, so we'll go through that, a little bit more detail about Cassie, and, um, your strategy and innovation background, which is, um, yeah, all exciting. But, um, I know we gave a little bit of an introduction at the start, but do you want to give people a bit more of a detailed overview of how you've got to Cassie today? I think your background's, um, again, super interesting, and it'd be great to hear about.
Keith Dear: Very happy to. So, um, well, I was an intelligence officer in the Royal Air Force for 18 years, um, with multiple kind of operational tours in various interesting parts of the world. Um, most relevant to this conversation, though: I did my Masters in Terrorism and Counter-Terrorism at King's, uh, in 2011. And in 2015, the US Air Force and the Royal Air Force co-sponsored me going to Oxford to study experimental psychology. My thesis there was on the effect of surveillance on behavior, but actually I spent as much time writing, thinking, and talking at the intersection of psychology and AI, um, humans' and machines' relative strengths and limitations, and went on to continue to publish and write in that area.
And then when I went back to defense on completion of the PhD, uh, I ran various projects in and around the same domain, whilst continuing to write, speak, and advocate for the rapid adoption of AI, um, against some of those timelines that you were talking about. Um, and then from there, uh, I joined Number 10. In 2020, I was asked to join, um, by John Bew, who was the, um, Prime Minister's, uh, lead foreign policy advisor at the time, um, where I was an expert advisor on science and tech, in the context of our rewrite of, um, the UK's international strategy, the Integrated Review, from 2020 to '21. I left in December 2020.
I joined Fujitsu and led their Center for Cognitive Technologies, a 22-million-pound center. We covered everything from, like, quantum computing, uh, deep tech research on neuro-symbolic AI, through fusion and digital twins, uh, to investing in and incubation of new companies. Um, and then a kind of fusion cell of, like, okay, well, how do we actually make all of the outputs from these technologies legible to humans in a way that best enables their decision making? Um, and then I left there in, uh, March of this year, um, to set up Cassie, um, which we've already talked about. So that's me.
Chris: Hmm. Nice. And, um, yeah, thanks so much, Keith. Um, really great background, and, um, like I say, excited to speak to you. Um, what do you think the aha moment was for Cassie, by the way? Um, before we get into the meat of the podcast.
Keith Dear: There are two ways to answer that: from a kind of business, um, and practical perspective, and then there's the intellectual development of the ideas. I think, uh, my co-founder, John, likes to say that you can trace them back to a paper that I wrote, which we talked about when I first met him. Uh, so I wrote the paper in 2011, during that Masters that I mentioned, uh, called Beheading the Hydra.
Um, and at the time, in various ways, I'd been involved in the UK's, um, leadership targeting campaign against the Taliban in Afghanistan. So, the capture or killing of named terrorist and insurgent leaders, with the aim of kind of degrading the insurgency. And what I wanted to understand was: okay, but what is it we're trying to achieve, and how do we know we'd achieved it? So you can see instantly there the kind of structure of Cassie today, in a paper that I wrote, hmm, terrifyingly, over 15 years ago. Um, so you could go back to that and say, well, look: thinking clearly about how you would know if you'd achieved this thing, and what are the things that would most influence your achievement of it, was there in that paper.
And I think that's a defensible claim, not mine originally, John's, but when he said it, I was like, yeah, I think that's true. Certainly understanding, um, Isaiah Berlin and then Philip Tetlock's work. So, Isaiah Berlin's work on hedgehogs and foxes and, uh, how we make decisions, and then later, um, Philip Tetlock's Expert Political Judgement and Superforecasting, which hopefully we'll talk a bit more about.
But, um, I'd been advocating for those methods and methodologies in defense, and John, my co-founder, was doing exactly the same thing. Um, by 2015, our frustrations with the slow adoption of what we continue to regard as forecasting best practice, some elements of which underpin what Cassie does, um, had bubbled up to the point where we published in Air Power Review a paper, still available online, called Assessing Assessments.
Um, and that looked in detail at things like the terrorism threat level, which, um, every year since 2007 has been, um, I think it's either severe or critical, something like that, which in the probabilistic language you're supposed to use means there's been a 75% chance of a terrorist attack. Um, and yet that can't be true, right? There hasn't been a terrorist attack on 75% of the days since 2007. So, um, however you define it precisely, it's very clear that that isn't an accurate reflection of the actual day-to-day probability of a terrorist attack taking place. And therefore there's a real problem with that, um, because it means people might, like, game the predictions, or ignore the predictions, or maybe they take 'em super seriously, but now, um, their resource allocation is, uh, out of proportion to the actual threat.
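The arithmetic behind that point is worth seeing. A rough sketch, with illustrative numbers only (the real threat-level definitions are more nuanced than a single daily probability):

```python
# Illustrative numbers only: if "severe" literally meant a 75% daily chance
# of an attack, the implied count of attack-days since 2007 would be enormous.
days_since_2007 = 18 * 365          # roughly 2007 to the time of recording
implied_daily_p = 0.75
print(f"Implied attack-days: {implied_daily_p * days_since_2007:,.0f}")  # ~4,900

observed_attack_days = 20           # hypothetical figure, for illustration
print(f"Observed daily frequency: {observed_attack_days / days_since_2007:.4f}")  # ~0.003
```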
So we wrote about that in 2015.
Um, at Fujitsu we did some work, uh, looking at how you could score questions, um, like, can you score questions in the way that we know you can score forecasts? Uh, and that definitely, um, has helped sort of develop the thinking, uh, around this.
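For reference, the standard way forecasts themselves are scored is the Brier score: the mean squared error between stated probabilities and what actually happened, where 0 is perfect. A minimal sketch:

```python
def brier_score(forecasts, outcomes):
    """forecasts: probabilities in [0, 1]; outcomes: 1 if it happened, else 0."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# A calibrated forecaster vs. one who says "severe" (0.75) every day:
outcomes = [0, 0, 1, 0, 0]
print(brier_score([0.2, 0.1, 0.7, 0.3, 0.1], outcomes))  # 0.048: well calibrated
print(brier_score([0.75] * 5, outcomes))                 # 0.4625: badly calibrated
```

Scoring the questions themselves, as opposed to the forecasts, is the harder problem the Fujitsu work was probing.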
But, um, in the end, when we set Cassie up, we kind of started again, going, okay: given everything we know, how would we build a strategy engine, um, that maximally reduces uncertainty around outcomes? And that's kind of the intellectual heritage of Cassie, I suppose.
Then, if you'll forgive me, the business side of, like, well, what was the aha moment: it was last summer, reading Nick Bostrom's book, uh, Deep, I think it's called Deep Utopia. That's right. Uh, Life in a Solved World, something like that. It's, uh, with apologies to Nick, a profoundly odd book, uh, written in a really idiosyncratic way, uh, with some students attending Nick's lectures and then, like, reflecting and talking about them. Uh, but it is kind of fun.
Um, but it's the opposite end of the spectrum. So, back in, I think, 2014, Nick Bostrom wrote a book called Superintelligence, which was like, well, if we develop a superintelligent AI, maybe it's gonna kill us all, either through malice or misalignment. In this book, published a couple of years ago, he was like, well, what if we get it all right, and we, like, solve cancer and cure everything and make everything easy because AI is so much smarter than us? And honestly, in many ways it's just as dystopian and just as terrifying.
I remember reading it in the garden here, birds singing in the trees, roses growing in the garden, sweat on my palms.
Like I was looking over the edge of a precipice, like, oh my God, this thing that I've been talking about now for nearly 10 years is happening. It's happening now. And I knew that intellectually, but to feel it viscerally is very different. Very different.
And I thought, I'm not sure it makes much sense to continue to kind of sit in a very good job in an organization that I enjoyed working for, but a large corporate where necessarily things move slowly. There's a tidal wave of change coming, and I wanna be in the best possible position to surf it and not be crushed by it. And so that's why I launched Cassie.
Chris: Nice. Um, super interesting. And what do you think the best utopia, uh, could be? And why did that scare you?
Keith Dear: Well, Nick, when he talks about life in a solved world, he's like: okay, yeah, well, we might have, um, extreme longevity, so hugely increased not just lifespans but health spans, as we cure diseases using AI; and, uh, all the kind of, um, tasks that frustrate us or take up our time around the home or whatever are now done by superintelligent robotic machines, right? So you never have to do the gardening again. Uh, you never have to do the laundry again. You never have to do, like, whatever those things are that you'd rather not do. The point is: okay, but if you knew that a machine could do all these things better than you, where would you find pleasure?
You know, say you say, I really like going shopping. Yet, you know, you could just click on the web and get your superintelligent AI to go and find what it is you wanted. And you say, oh, no, no, but I like discovering things that other people wouldn't have discovered; well, then you'd say, okay, I want something unique that's gonna make a statement. Like, where's the pleasure now in going shopping, knowing that you've bought something slightly worse? Where's the pleasure in doing the weeding if you know that your robot would do it a hundred times better? I don't think anyone enjoys weeding, but where's the pleasure in gardening if you know the machine would've done it better? Where's the pleasure in, like, all of these things?
And he just goes through it sort of one by one. And I won't push it too far for the podcast, but it's well worth reading; it goes into all these things, like, would it actually be a better world? And, well, I'm not sure that it would. Plus, it makes real the things that I really worry about, like the profound risk to a huge number of jobs, if not all jobs. I mean, definitionally, all jobs are redundant if we get to a superintelligent AI.
Um, and that would be such a profound political shock, um, and one I don't think we're in any way prepared for. So there's a whole raft of things there, right? Um, but yeah, the meaning and then the economic consequences: what does meaning mean in a solved world, and what are the economic consequences? But it was more than that really, Chris. It was feeling something viscerally that I'd known intellectually for a long time that kind of propelled me to go: do you know what, I've gotta do this.
Chris: I think, um, you know, I speak with friends who potentially aren't in the machine learning and AI space, and maybe not as initiated as you and other people that are working in the industry daily, but it is a concern. And, you know, my partner's a lawyer, and she worries about it, um, you know, how AI's gonna affect that industry. And, um, you know, the irony of, um, you know, developing these programs is it could replace the engineers and machine learning engineers first, and a lot of connections in, um, my network are also worried. So yeah, definitely, um, at the front and center of people's minds.
So, um, yeah, let's move on to China. I know, um, you recently visited there a couple weeks back, and, you know, it's a state that is quite guarded at times in terms of, um, you know, what they put out into the media. And, uh, yeah, it's a great opportunity for us to see behind the, um, the curtain, um, you know, from yourself, who's visited. So what were your initial thoughts on visiting China, I guess?
Keith Dear: Behind the great firewall, you might say. Um, so, I mean, let me explain why I was there, first of all. So I'm part of something, uh, called the Roundtable for AI, Security and Ethics, RAISE, uh, that's run by the United Nations Institute for Disarmament Research, UNIDIR. So it's UNIDIR's RAISE. Um, and they've held roundtables now in Italy and in South Korea, with, um, individuals invited from academia, business, uh, think tanks, that kind of thing, and a few government officials, um, to talk about, like, where AI is going and what it means for defense and security. Uh, and security in the round: not just in the military domain, but more broadly, security in the sense of national security and misinformation, disinformation, the whole spectrum.
So, um, yeah, I've been to two of those. This one was held in Beijing, um, hosted, um, superbly, I have to say, by the Chinese while we were there. Um, that's why I was there. That's what we were there to talk about: to try to help RAISE understand where it needs to focus, uh, in order that it can help to guide the UN General Assembly on where it should focus, and ultimately where the Secretary-General should focus his concerns around the development of AI, specifically in the context of national security.
I need to be a little bit careful in what I say, because, um, nearly all the meetings were held either under the Chatham House rule or, more directly, you know, um, in confidence. Um, that doesn't mean I can't say anything useful. I mean, I think I was struck by how, um, articulate, informed, um, and on top of their brief all of those that we met in Chinese business, academia, and the military were. Um, we had some of the best, highest-quality conversations around what AI means for the future of security and warfare that I've had anywhere. It's not necessarily a surprise, but, um, to have it confirmed is, I think, worthy of note.
Um, it was interesting to hear their focus, which again is public, on diffusion of AI and integration into industry. Faster diffusion, that's their aim: to win economically and in security through rapid integration into their industrial base, as opposed to pursuing frontier research, um, with the huge CapEx that that involves. Um, so I think that was interesting.
I think there was only one company willing to say they're explicitly working towards AGI, and that was Zhipu, um, uh, who we also met with. And it was super interesting, but, again, I'm being a little bit careful in saying any more than they would probably want said publicly.
Um, but yeah, I think the main point is that they're not necessarily chasing AGI in the way that, say, the US big tech companies are. Um, and their reasons for that were that, I think, they fundamentally just think their advantage is integration into a huge industrial base; in the end, they'll learn more from that process and therefore be able to accelerate. Um, and therefore, perhaps if I might be slightly more cynical, that makes it easier for them to call for, like, bans and restraint on the development of AGI, because it's not really gonna affect them, 'cause that's not where they're focused or investing. Um, but yeah, that's probably a decent overview.
Chris: Nice. Um, yeah, we're gonna speak about the, um, UK economy later on and how AGI might affect that. And I'd love to hear, you know, if you can say, any contrasting thoughts about, you know, what we as the UK can potentially do better, because, um, you know, we're potentially a smaller fish in a bigger pond with the US and China. So, um, but yeah, I'd love to get into a bit more of the meat of the podcast.
I know, um, as I say, I first came across you, Keith, when a couple of people at Fujitsu, uh, spoke about your leadership and, you know, how highly they, uh, respected you. So, um, I'd love to just learn a little bit more about your key principles of leadership. You know, you've served in the military and then Fujitsu in the commercial world; you know, what are kind of the common threads that have helped you to be a leader and, uh, guided you?
Keith Dear: Um, well, I think, um, one of the core principles, the Army defines it slightly better, but the Air Force has the same idea, um, is the idea of serving to lead. Um, which, I think it's Greenleaf that's also written about that in the civilian domain? I think that's right. Um, servant leadership. And I'm sure I don't always succeed, nobody always succeeds, but I think the aim is trying to understand that, you know, your job as a leader is to empower those beneath you to succeed maximally. And sometimes that does involve being directive, but mostly it involves listening quite hard, understanding what their problems are, helping them to shape the solutions, and creating the right environment for them. Um, I think humility is important in that, um, area as well. Like, knowing what you don't know, uh, and therefore being able to listen to people. But you still need to challenge. But yeah, I think if I were to come down to, like, one thing, I would boil it down to the idea of servant leadership.
And I think it is Greenleaf that wrote about it outside the military; the military drums it into you, from the slightly cliched, you know, leaders-eat-last idea to the more principled idea, which is pretty close to Steve Jobs's, of hiring smart people not to tell them what to do, but to do what they tell you to do. There's a balance in all these things, but if you wanted, like, a distillation, I do think that's really important, and I think it requires constant humility. Um, and that's really hard, and no doubt I fail often, but that's what I aspire to.
Chris: Nice. And, um, coming from the military into, in your words, a bit of a slower pace at Fujitsu, did you notice, you know, did you have to adapt your leadership style in terms of coming from that world to, um, you know, more of a civvy street role, I guess?
Keith Dear: I dunno that it's a slower pace. Uh, it's large organizations: there are emergent properties of large organizations, and in defense this is true as well. The larger the organization you're working in, the slower they tend to make decisions, uh, and the more you have to do, like, frustrating, tedious, but really important things like stakeholder management. Uh, again, making sure that everybody not just has had their voice heard, but feels like they have had their voice heard. Um, trying to win people over, all of those kind of persuasion games, as it were, uh, are just as important. So you can kind of transfer all of that across whole cloth.
I do think there are important differences, though. I'm not talking about Fujitsu here, I'm just talking about large civilian organizations. One of the things you don't really realize is such a strength in the military: you can never really take your boss's job in the military, right? Like, no matter how good you are at it, you normally have to do two to three tours. Even if you're an absolute superstar, you don't go from being, for example, second in command to commanding officer, or chief of staff to commanding officer. You certainly don't jump from halfway down the organization to the top of it in the same year. You have to do a couple of jobs, you have to get the right reports, then you go and do some training, then you come back.
So there are all these pathways; you are constantly held back. Now, when you're in it, it drives you absolutely bananas, especially if you're ambitious and you want to get on. On the other hand, what it means is that you are never directly competing with your boss. So servant leadership is a lot easier when you know the person isn't trying to take your job.
Um, uh, it's why I think, like, chief of staff functions work better in the military, because you can afford to let your chief of staff focus on what we call the down and in, right, like managing everything in the organization, um, looking after it, while you do the up and out as the, as the CEO. And I've been in both those roles, like doing the down and in and trying to do the up and out. And, uh, but in the military, that person literally can't take your job.
Whereas in the civilian world, the board might well look and say, well, hang on a minute: this person who is actually leading the organization day to day is doing an amazing job, and, you know, all Keith's doing is running around talking to senior people and going to events; maybe we should just promote that person. So I don't think in the civilian world you can afford that. You have to have that balance, uh, of, like, uh, competing and cooperating. It's a different balance in the civilian world, because you can promote so much more quickly.
I mean, at the very senior ranks in the military, that stops being true, and actually then the parallels are much closer. So once you get to one star in the military, um, you are competing with everybody around you, essentially, for the next job. So it's a bit more, well, the nature of the cooperation and the competition changes at that rank. So I do think there are important differences, and I did have to kind of figure out how you do that.
Um, I think, to the extent that it's possible, like, Ray Dalio's idea of radical transparency is really helpful there. Like being like, okay, I know you might want my job, or the other way around, like, you know that I want yours. Um, however, all right: how do we succeed against these objectives and enable you to succeed, so that, you know, I'm taking your job when you move on to bigger and better things? I mean, you can't go too far with that, but I do think trying to work through that is important, and that's what I try and do.
Well, you'll know this, but that's what I try and do with recruitment: being clear, like, look, I know that you wanna be CEO of your own business; I can help you get to a point where you're ready to do that. Let's be honest now: like, how long do you wanna work here? How do we get you to the position where you're ready to move on, and how do we ensure that you move on in a way that's really positive for everybody? And doing that from the outset, I think, is a way of, kind of, I try to have those conversations really early; I've had one this week with somebody I'm hoping we'll bring on next year. Um, how do we make sure that that person achieves their ambitions, even if you acknowledge from the start those ambitions can't be achieved without, you know, me leaving or retiring or whatever? Um, so, yeah, I think there are some things to be learned there in the differences between military and civilian leadership that are quite important.
Chris: Nice. Super interesting. And last question on leadership: you've obviously made the, uh, transition from a corporate, um, uh, managing director role to now a startup. How have you had to adapt your leadership principles and style? Um, what's been the biggest difference leading and building a small startup like Cassie?
Keith Dear: At the risk of maybe being too candid and greatly amusing people that I've worked for over the years: not just in the corporate environment but throughout most of my career, I was an advisor. I was an intelligence officer, then I was an advisor to the Prime Minister, then in the large corporate, yes, I ran my own section, but there was a big hierarchy above me as well, right? And to an extent, in order for me to get anything done, I had to persuade the person above me that it was a good idea. And therefore, in that sense, I was still an advisor. I mean, I had responsibility downwards, but in a sense I was advising someone upwards who had to say yes. Um, and then probably another 10 people above them, 'cause large organization.
A thing that I have found, uh, is that it is much, much easier to see things clearly, to actually see things clearly, and to give advice, I don't mean to denigrate advising for a second, than it is to be the person making the decision. Like, in the startup, as the CEO, um, I'm the one constantly making the decisions, rushing from issue to issue. There's a million things all happening at the same time, and I'm involved in the thick of it. And there's just something different about being the person that makes the decision versus being the person that's giving the advice.
And often when people give you that advice, you think: oh, two years ago I'd have seen this the same way you see it, and I'd have been telling someone else this, and yet I couldn't see it until you said it. And that continues to surprise me, because I'm not sure I ever saw it until I was in the role; I never really understood why it would be different. And it is different. It is fundamentally different, um, when there's no one else; it just kind of sits with you. So, yeah, there's a big lesson there. And if you're listening to this and you are currently an advisor: have some sympathy for the person that you're advising, more than I did.
Chris: Yeah, well put. Um, and I think that segues nicely back onto prediction. Uh, just talking about AGI for a second: um, you know, forecasters predict AGI by 2026, I think. You know, I probably wrote this agenda about four weeks ago, maybe longer, and it's probably changed since then, so it may be slightly out. But, um, full AGI by around 2030, and superintelligence, um, around 29 months after AGI, um, depending on your definition. Um, you know, when do you think, um, we'll get human-level AGI? Um, do you agree with the current predictions?
Keith Dear: Uh, well, I mean, firstly I would say: who am I to disagree? Like, what we know from the science of forecasting, which is so core to what we do, is that the median estimate of a crowd forecast, particularly under deep uncertainty, outperforms the individual estimates. Um, and that there are a small number of people that are superforecasters; the median of their forecasts outpredicts their individual predictions as well. So the most principled forecast you can get is from a very large number of predictors, weighting most heavily those that are consistently the most accurate predictors.
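A minimal sketch of that aggregation idea: pool many forecasts, weight forecasters by track record, and take the weighted median. The weighting scheme here is illustrative, not any particular site's method.

```python
def weighted_median(forecasts, weights):
    """Median of forecasts in which better track records count for more."""
    pairs = sorted(zip(forecasts, weights))
    half, cum = sum(weights) / 2, 0.0
    for value, weight in pairs:
        cum += weight
        if cum >= half:
            return value

# Five forecasters' probabilities for some question; weights might come from,
# e.g., inverse historical Brier scores (higher = more accurate historically).
forecasts = [0.30, 0.55, 0.40, 0.80, 0.50]
weights   = [1.0,  3.0,  2.0,  0.5,  2.5]
print(f"Crowd forecast: {weighted_median(forecasts, weights):.2f}")  # 0.50
```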
Um, right now, uh, the world's largest crowdsourced forecasting site is Metaculus. That's where those forecasts that you quoted come from. Uh, and so I've used those for years in my writing and, uh, talking about this subject. So, um, I will offer you my view, but it's just one tiny data point; you should take the median of something like Metaculus. Uh, and we'll come on to how AI might be, and is, helping to forecast better than humans, but let's come onto that later.
So for now: um, Metaculus predicts weak AGI by 2027. But weak AGI is really interesting. Weak AGI was, uh, defined I think in 2020 or 2021, so pre-ChatGPT; it's quite a technical definition. Um, it's got things like, uh, the Winograd test, which is a kind of grammar-based language test that, um, can trip machines up. Uh, it's got, um, I've forgotten the name of the game now, but one of these eighties, like, Atari-type games that was, um, thought to be particularly difficult for machines to solve, a whole list of criteria.
Most of the criteria defined back then have already been achieved. The Winograd test is one. AI now can do it: you could, like, go put the Winograd test into Wikipedia, take some of the questions, drop them into ChatGPT, and it'll solve them. It'll solve new ones that you can come up with that, at least in theory, weren't in its training data. So loads of those things are already solved. There's only a few things to go, and that's why that prediction is so soon. But remember, when it was written, people thought this was, like, well, I think 2050 was when people originally expected, and now it's like, well, we're almost there. That's why the prediction is 2027.
And I think that's important, even if we could probably all agree, when you look at it, that that's not really general intelligence. Like, I'm much smarter than that; there are things here that I can't do, but actually, you know, most of the people we know are smarter than that. That's not us, that's not my general intelligence: I can do a lot more than just these things. You're right, Montezuma's Revenge is the game. I can do a lot more than just complete Montezuma's Revenge and do the Winograd test. Like, it doesn't sum up my intelligence. But it also shows the way in which we redefine what we mean by intelligence and general intelligence: every time a machine's able to do something, we go, well, we need a new definition.
So: weak AGI, 2027. Oracle AGI, 2029, um, broadly defined as an AI that can answer any question we might pose to it better than any human. Now, this immediately runs into another thing of, well, what do you really mean? Do you mean it answers it better than this human, better than Keith, better than Chris? Better than Chris and Keith working together? Better than Chris and Keith and their hand-selected group of, like, amazing people for the pub quiz? Um, better than all humans, better than a crowdsourced forecasting site like Metaculus? It'd be like, well, what do you really mean by better than humans? Better than all humans who've ever lived? Better than the most genius humans on the planet? Or just better than the average human, whatever that might be? So there are a whole load of things here that invite contestation and make these terms controversial.
Um, and then when we talk about superintelligence, again, there are different definitions. The one on Metaculus basically is an AI that is AGI but also much better than us in robotics. So, proprioception, our sense of ourselves in time and space; dexterity, the ability to manipulate objects; and locomotion, as in our ability to move over broken terrain. Things that historically we've found hard to get machines to do, but which we find natural, because we've got 300,000 years of evolution, and, you know, however many million years of evolution before that, that have enabled us to do those things.
So, I mean, firstly, we should note how profoundly important the achievement of any of those things is. A general intelligence that can do all the things, even under the weak definition, is one that's gonna profoundly disrupt the jobs market. And one that can answer a question better than any human, 2029 is the forecast for that, uh, is gonna be pretty important for humanity. And one that can then do all physical tasks better than us begins to beg the question of, well, what on earth are we doing? I mean, definitionally, there are no jobs at that point.
On the other hand, uh, we also have to consider the, um, different definitions. So, superintelligence: I think Sam Altman and Dario Amodei sometimes describe superintelligence as an AI that can, uh, deliver independent scientific breakthroughs, or an AI that can recursively self-improve. All those things are slightly different things, and all of them have really profound implications. When you predict against a definition, it really matters how you define it: like, well, how would you know you'd achieved it? Back to what Cassie does.
Um, so yeah, my personal view would be to go with those predictions, to say, well, that's the most principled forecast we can get, uh, and until I can find a better one, that's what I'm gonna go with. Um, there's another slightly more robust definition of AGI on Metaculus that predicts 2033. Um, what I am absolutely convinced of is that this is going to happen in my lifetime. It's probably gonna happen in the next five years, more likely than not; so, more than a 50% chance. Um, and therefore, if you're not betting on AI, as I've often said, you're betting against it. And my going all in on the business is my saying: no, look, I've really got skin in this game. I really believe it, and I'm proving it by, like, kind of taking a chance myself. Um, so yeah, that's why I think it, I mean, that's it at a high level. We can get into the underlying trends and the other things that are driving that change if you want, but that's the top level.
Chris: Yeah, let's. Um, really well put. I think, um, you know, researchers at Google would argue that, um, you know, in some ways AGI is already here. You've said to me off air once, uh, but I think it's worth saying on air, that, you know, it can translate, um, you know, a paragraph into 50 languages, something that many humans can't do. What are your thoughts there?
Keith Dear: There's a famous paper from a few years, well, a couple of years ago now at least, um, from Ethan Mollick at Wharton Business School, um, talking about the jagged frontier of AI capabilities. Um, you know, AI is already superintelligent in multiple domains. If you wanna win a game of chess, like, you can play, or you can hire Magnus Carlsen, the, the leading chess player, um, or you could employ, um, AlphaZero for chess, or, um, Stockfish or one of the chess engines. Well, you should employ the machine, uh, because it consistently outperforms both humans and humans and machines working together.
Um, that's increasingly true for a whole range of, like, scientific tasks. It's increasingly true for a whole range of cognitive tasks. Um, and I did post on LinkedIn and Twitter, and I've occasionally wondered whether I should have done this, that I thought, when o3 was released, that it is AGI. And the reason I said that is, like, if you have a question that you wanna ask, you could ask me, but I would suggest nine times outta 10 you should ask o3, and it would give you a better answer.
And the same with, if you think, um, Who Wants to Be a Millionaire and phone a friend: if I had the option of phoning a friend or putting it into ChatGPT, uh, so, yeah, GPT-5 now, the pro version, I would use that every single time over phoning a friend. Every single time. And I'd suggest so should you, and so should anyone else. And they would win more often if they did than phoning up their mom, their dad, their best mate, their, you know, favorite pub quiz champion. So I think it's already smarter than most of us as individuals, certainly in the breadth that it covers. That doesn't mean that it's better than all humans at all things.
There are still some things, you know: if I'm writing a blog post, for example, I find that it's useful to start by prompting GPT-5; it's useful to then go away and write it; it's useful then to go back and, like, refine the paragraphs using it. But I have never been able to just prompt it and publish, for example, and I don't think I ever would. Um, so yeah, my point is, I think you can defensibly claim we're already there, and that jagged frontier, superhuman capabilities alongside limitations in some areas, is gonna continue. We'll just see ever more superhuman performance in some areas, and then we'll see it dragging up the baseline in the areas where it's weaker.
Chris: Fair enough. I think some people would say, you know, we can't trust AI yet, it hallucinates; you know, when you go on ChatGPT, it just quotes some random thing from Reddit. Um, you know, what are your thoughts on hallucinations?
Keith Dear: Well, have you ever worked with other people? I mean, you know, the confident but totally unevidenced assertion is a kind of staple of most human conversation. Um, you know, we know from Robert Trivers' work, which I probably overquote, that, you know, we evolved to deceive ourselves to better deceive others. Uh, more crassly: we evolved to believe our own bullshit, on the basis that it makes us more convincing when we're trying to convince other people. Um, so, like, even your own beliefs are slightly shaky, hence the need for intellectual humility. Um, but I wanna be clear on what I'm not saying.
I'm not saying you should always trust the machine over the human. I am saying that what we ought to be doing is baselining and seeing, like, well, okay: how often in this particular thing that we care about does the human have a false positive or false negative? How often does the machine? And therefore, how much should we trust the machine? Like, an obvious example is self-driving cars, right? The question isn't, are people gonna die when self-driving cars are on the road? The question is, is it gonna make fewer errors than people do, and in which direction? Um, and, you know, then the really uncomfortable question that we don't wanna answer is, okay, but, like, how many casualties are we willing to accept?
'Cause that would determine, like, where you put the parameters. So, should it stop every time someone steps in front of it? Well, now no self-driving car can drive anywhere. You see what I mean? Like, we have to answer some difficult questions that we don't answer when it's humans; we just go, no, it'll be all right. Um, but in effect, by allowing people to drive, you've already made a decision that you're gonna accept a certain number of deaths every year. Should we reduce those? Yeah, I think we should. So, just as for self-driving cars, which make errors, so for other decisions: it's about being rigorous in how you think about it, and not having this kind of view of human exceptionalism, of humans are just better. Well, often they're not. Um, having the conversation about when you should trust AI forces you to think a lot more about human judgment, in a way that, unless you're a psychologist, you're probably not used to doing.
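The baselining he describes reduces to a comparison of error rates, human versus machine, on the same task. A toy sketch, with made-up numbers:

```python
# Made-up numbers, for illustration: compare error rates on the same task
# rather than asking whether the machine makes any errors at all.
def error_rate(errors, opportunities):
    return errors / opportunities

human   = error_rate(errors=150, opportunities=100_000)
machine = error_rate(errors=40,  opportunities=100_000)

if machine < human:
    print(f"Machine errs {human / machine:.1f}x less often; weight it more here.")
else:
    print("Human baseline wins on this task; keep the human in the loop.")
```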
Chris: Nice. Yeah, I feel like, um, you know, if a self-driving car makes an error, um, it's all over the news. And it might just be, you know, one time, but then, you know, is anyone actually presenting the statistics and the differences? Um, I just think there's maybe a little bit of resistance to change there. So, um, cool. Putting on your advisory hat, um, you know, as you said yourself, you're a former advisor to Number 10: um, on AGI and, um, how it's gonna affect politics and the UK's position in the world, what are your kind of thoughts there?
Keith Dear: Let me offer two things. So, firstly, I'd say, definitionally: if an AI is smarter than all humans, so if it can do any cognitive task that we can do better than we can do it, and if in time that also adds physical capabilities, then by definition there are no jobs. Every time you say, okay, well, Chris, I'm gonna go and move into this new area: no, no. If the machine is smarter than you, if it's physically more capable than you, then it's going to do that job better than you. Like, there's no two ways about that.
Um, now, that doesn't mean that there won't be a human preference for some human-produced things, in the same way that you might go to a craft fair today and buy a sort of pepper pot that is not very good but, you know, was carved by your neighbor or someone in the village or whatever, um, or just was carved by a human, and you admire the craft, even though, you know, you could go to IKEA and get one that works better for half the price. But, you know, it wasn't made by a human. So it doesn't mean there is nothing. But it does mean that, definitionally, the job could have been done better by a machine. And, like, the levels of unemployment that flow from that are enormous.
Why start there? Well, because I think that is the most important consequence, the most likely consequence, and the one that we appear to be paying nowhere near enough attention to in our national political debate or our international political debate, because that would be so profoundly disruptive. You know, for a Chinese Communist Party, for example, which, um, rhetorically is there to represent the workers and to enable, you know, the working classes to thrive: well, there are no working classes under that model, or at the very least there are very few jobs for them. Profoundly disruptive. No less so for democracies, where, you know, wealth inequality is likely to accelerate, and this is true in China too, but wealth inequality is likely to accelerate rapidly.
Um, so what do we have now? Do we go for something that looks like fully automated luxury communism? Uh, do we go for something that looks like oligopoly, with a small number of, like, very wealthy benefactors? Like, how do we structure it? I think there are answers to those questions. Um, John and I have spent a lot of time talking about digital democracy and how you would build for a world that looks like that. But I don't see those conversations happening, certainly not at the senior levels that you'd want, and certainly not with the public. And yet: 2027, 2028, 2029, right? For the different forms of AGI.
So, um, I think there's that element. Then, I think, from an international security perspective, one of the arguments that not just me but other people have made, but I'll own these comments as my own opinions and not those of anybody else: a nation's relative power in the international system will be completely transformed by the combination of automation, robotics, and AI.
So it means nations are now less dependent, companies and countries for that matter are less dependent, on people to create wealth, um, and to scale their armed forces to deter or fight. You know, you can now create new goods and services, you can create new products, using automation, robotics, and AI, with far less than, perhaps, no reliance on humans.
I think if you think about where Elon
is trying to take the world with,
um, space colonization, it means
that now you could generate huge and
perhaps unprecedented wealth in space.
So now you're no longer dependent
on the geography on the ground.
I think things like metaverse economies are already, um, beginning to transform the world, and that will only increase, both in terms of influence but also in generating money in vast virtual worlds. So, like, I wrote a paper a few years back called Beyond the Geo in Geopolitics, where I looked at this, and I still think that in the long term, or maybe even the medium term, the things I talked about in there are true.
Um, and I think we also need to think about what it means for the kind of space economy.
But for a nation like Britain, if your relative power in the international system is no longer dependent on your demographics or even your own local geography, well, now the returns to smart strategy are unprecedented. Like, getting your strategy right is what is gonna determine whether you succeed. Um, and perhaps you can see the kind of segue into why I think Cassie could be so central, uh, in that.
Chris: Nice. Yeah. I'd love to, um, yeah, talk about that with, with Cassie in a second. Um, do you think there's a danger of one, uh, country or state owning AGI? Um,
Keith Dear: Well,
Chris: yeah, maybe let's not get into specifics of who, but there seems to be an open and closed source race at the moment.
Keith Dear: Clearly there is, right? You know, effective diplomacy, um, and effective military tactics, or strategy, or product development, or invention, all of these things, you know, weapons, even nuclear weapons, all of these things are the product of intelligence, right? Um, they are dependent on intelligence. So again, definitionally: if one company or one country has access to smarter-than-human AI and nobody else does, definitionally it's gonna be better at diplomacy, at tactics, at persuasion, at strategy, at developing new products, at developing new weapons. Like, definitionally, its relative power is gonna be vastly increased over anyone else's.
And if that AGI is capable of recursively self-improving: so you get to this AGI, and now it's getting better every day, every day, every day, maybe every second, I mean, maybe that's an exaggeration, but certainly every day is completely reasonable. If it's recursively self-improving at that rate, there's a real, real risk of an intelligence runaway, where the company or country that gets there first just leaves everybody else in the dust. You can literally never catch up. Never catch up. That is a realistic risk.
Under the Biden administration, the, um, White House published a, um, memo on AI for national security, which hints very strongly at the possibility, I think it might even state it explicitly, but it certainly indicates very strongly, that the US could nationalize, take ownership of, OpenAI or any of the big AI houses if they get to AGI. Um, so if that happened, US industry and the US government would now have access to a thing that enabled them to increase their relative power, and continuously.
What that means for other nations is that you would live under the US AI umbrella, in the way that we currently live under the US nuclear umbrella, um, where you'd need them, you'd need the support of this smarter nation, for your own protection in all the ways that I've described. Um, well, even with allies, that kind of protection is never free. You have to sacrifice quite a lot in return for somebody else looking after you. So I think it matters profoundly even if it is an ally as close as the US. If it's a country that ideologically is, um, very different, uh, and perhaps adversarial towards the UK, then I think it could be completely disastrous.
Um, like, the values that are embedded in that AI, if it's smarter than all of us, are likely to become, if there's no counterbalance, the values that we are all forced to, um, align to, I suppose, or live within. Um, so yeah, I think it's profoundly important. Um, I don't think you can stop the development, so I hope that the UK is one of the nations that gets there at the same time as, or first. But then, if you live in another country, you should probably be hoping it's your country that gets there first.
Um, and none of that
Chris: Yeah.
Keith Dear: insulates you.
Chris: Fair.
Keith Dear: It doesn't. Like, even if it is the UK, there's still a ton of risks for all of us.
Chris: Yeah. Uh, I hope so too. Um, and last questions on AGI: um, you know, if you were advising Number 10, um, you know, you've mentioned one of your biggest concerns for the UK in general is job displacement. Um, you know, what kind of conversations should we be having to best prepare for this?
Keith Dear: Well, I think, firstly, you'd hugely increase your R&D budget, and you'd make the case for why you should be doing that. Like, we need to be spread betting across this. I don't think we can outcompete the big US AI companies, not least 'cause they've now got an 18-month to two-year head start, um, in developing the, uh, infrastructure that is likely to see the rapid acceleration with the current models. And therefore you go, well, where might the leap-ahead technologies be? Where could we spread bet? ARIA has a program called Nature Computes Better; that's an example. There are other, uh, examples of alternative architectures and approaches that we should consider investing heavily in, to give ourselves a chance of winning.
Um, I think we should be having the conversation with the public about how disruptive this has the potential to be. And I would like to see that led by Number 10, ideally, uh, and from all ministers, 'cause there's not a single department where this doesn't matter. And I'd like to see the opposition similarly engaging with how profound this could be. Not saying it's definitely gonna happen, but saying: look, this is a thing we need to start taking seriously as a country, uh, so that it is at least one of the top three issues. Whereas I think it's still regarded as a kind of fringe science fiction fantasy, and something, you know, slightly absurd, um, and adolescent to talk about. And it's not; it's none of those things. Like, nobody who thinks about it for any length of time thinks this isn't a thing that we should take seriously.
So I'd like to see that conversation driven. The problem with that, of course, is that doing so would require the reassignment of resources, and the reassignment of those resources is necessarily likely to be unpopular. It doesn't matter what you take it from; somebody's gonna complain. Um, but against the risk that we face, I think that's absolutely essential.
And, um, to be fair, both previous Prime Ministers and the current Prime Minister, since Rishi Sunak at least, have all supported both ARIA and the AI Safety Institute. The latter working on, uh, alignment research, and trying to make sure that we don't build an AI that isn't aligned with what we want, and that neither from malice nor, um, through over-optimization on a given goal causes quite widespread distress or even death. So I think you'd also continue to put money into that area, which, um, again, uh, I think we have, but you could always do more. I think we should lead that global conversation. So, uh, those are some of the areas; there's a lot more, of course, but that's a starting point.
Chris: Nice. I think that segues nicely into Cassie and who you're trying to help. And, um, you've already spoken a little bit about Cassie, and, um, we're kind of gonna go into the blog that you wrote, Everything Is Prediction, which I think explains, uh, pretty well, um, what you're doing. And I'll link that in the show notes after. Um, but yeah, do you want to just talk us through, um, that philosophy of everything is prediction?
Keith Dear: Yeah, will do. So the idea that everything is prediction ultimately comes from all of the work and research that we've done. It's the idea that, okay, a decision is a form of prediction, right? Like, if I do this, I expect this to be the outcome. I'm predicting that if I do this, I'm expecting this kind of reward.
An action is a kind of bet on a payoff.
So if I reach for my coffee now, which you can see me doing, and I take a sip of it, I'm making a prediction that the coffee's gonna be the right temperature to drink, that it's gonna wake me up a little bit, all these things. There are priors, if you like, probabilities in my head as to how likely each of those is. Now, I don't think about it explicitly, but I definitely assign them.
And you can tell that when you take an action. This happened to me in the car recently, and hence I've started using it as an example: I reached for my coffee cup, I took a sip, it was hot, it burnt my tongue. I updated my prediction and waited for it to cool down before drinking again. So an action is a bet on a payoff, and you update it when you get information and feedback that tells you that you're wrong.
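To make that "bet on a payoff, updated on feedback" idea concrete, here is a minimal Python sketch of a Bayesian (Beta-Bernoulli) update. The coffee numbers and the interface are invented for illustration; this is not anything from Cassie.

```python
from dataclasses import dataclass

@dataclass
class BetaBelief:
    """Belief about a yes/no event (e.g. 'the coffee is cool enough'),
    tracked as a Beta(alpha, beta) distribution over its probability."""
    alpha: float  # pseudo-counts of 'yes' outcomes seen so far
    beta: float   # pseudo-counts of 'no' outcomes seen so far

    @property
    def p(self) -> float:
        """Probability currently assigned to 'yes' (the posterior mean)."""
        return self.alpha / (self.alpha + self.beta)

    def update(self, outcome: bool) -> None:
        """Bayesian update after acting and observing the payoff."""
        if outcome:
            self.alpha += 1
        else:
            self.beta += 1

# Prior: past experience says the coffee is usually drinkable by now.
belief = BetaBelief(alpha=8, beta=2)
print(f"before sipping: P(drinkable) = {belief.p:.2f}")   # 0.80

belief.update(outcome=False)  # sip, burnt tongue: the bet didn't pay off
print(f"after feedback: P(drinkable) = {belief.p:.2f}")   # 0.73
```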
But from that we can extrapolate and say: actually, tactics, operational plans, strategies, even day-to-day decisions, they're all just forms of prediction, but over a different scale and time horizon. Like, if I do these things, I expect this to be the outcome. In order to know that, you have to get over that Harvard Business Review point, that most of your strategies are not strategies.
You have to say: what is it you wanna achieve? When do you wanna achieve it by? How would you know you'd achieved it? And there's a whole raft of incentives for why organizations don't wanna do that. But in the end, everything is prediction, and being honest about that enables you to say: okay, if everything is prediction, like, if I do these things, I expect this to be the outcome, then what do I think the most influential factors are? What do I think the probabilities are? And you can see how we get to that through what we do at Cassie.
And I think that's the world we're moving into in any case. I think, as we get to AGI, we move into an allocation economy, where what determines your success, and this is already true, back to the returns to smart strategy, is how you allocate resources: time, money, attention. How you allocate those things will determine your success.
And what Cassie does is show you: okay, these are the factors that are most influential on your outcome, this change in X leads to this change in Y, and now you can do optimal resource allocation to constantly maximize your probability of success.
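As an illustration of that last point, here is a toy sketch of probability-maximizing resource allocation. The factor names, lifts, and the diminishing-returns rule are all invented; this is not Cassie's model.

```python
# Toy greedy allocator: spend each unit of budget on whichever factor
# currently buys the biggest lift in P(success).

factors = {
    # name: [lift in P(success) per unit spent, decay applied after spending]
    "hire_key_engineers":  [0.040, 0.80],
    "regulatory_approval": [0.025, 0.90],
    "marketing_push":      [0.015, 0.95],
}

p_success, budget = 0.30, 10
for _ in range(budget):
    name = max(factors, key=lambda f: factors[f][0])  # best marginal return
    lift, decay = factors[name]
    p_success = min(1.0, p_success + lift)
    factors[name][0] = lift * decay  # diminishing returns on that factor
    print(f"1 unit -> {name:20s}  P(success) = {p_success:.3f}")
```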
I mean, the implications of that are genuinely profound, and it's why I'm so excited about what we do.
Chris: Yeah. Speaking to you in the past, you seem super passionate about what you're doing, and since we actually spoke back in March, you've had some incredible wins already. Who are you trying to help, and why are they getting so many wins from what you're doing with them?
Keith Dear: Yeah. So a lot of our early traction, perhaps unsurprisingly given my background, has been in government, in defense, in security. And we're helping there, and obviously I have to protect our customers' interests here and not say too much, but once they say what their outcome is, we can help to show trade-offs. So, back to my prior experience in Number 10: when people bid for resources, they tend to say, if you don't fund this thing, the consequences for us are gonna be catastrophic. But often funding this thing over here reduces your probability of achieving that thing over there.
Well, we can show that in much more rigorous, detailed ways. We can show, okay, a change in the probability of X leads to a change in the probability of Y, sometimes complementary, but sometimes not, and allow you to make those trade-off decisions much more explicitly and in a much more informed way.
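A toy version of that trade-off, with invented numbers. Here funding X competes with Y; a positive cross-effect would be Keith's "complementary" case.

```python
# Toy trade-off: moving budget towards objective X raises P(X) but,
# because the budget comes from Y, lowers P(Y). If the outcome needs
# both, the net effect can go either way. All numbers are invented, and
# X and Y are treated as independent purely to keep the arithmetic simple.

P_X, P_Y = 0.55, 0.60      # current probabilities of the two objectives
DX, DY = 0.10, -0.06       # change per budget unit moved towards X

def p_outcome(units: int) -> float:
    px = min(1.0, P_X + DX * units)
    py = max(0.0, P_Y + DY * units)
    return px * py         # the outcome requires both X and Y

for units in range(5):
    print(f"move {units} unit(s) to X -> P(outcome) = {p_outcome(units):.3f}")
# Rises at first, then falls: past a point, robbing Y to pay X
# lowers the overall odds of the outcome.
```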
You can also see the gap between
human and AI predictions.
We have one customer where we were able to show that the AI was predicting that something would happen with 96% confidence, where the humans were about 51% in the other direction. So more or less a coin toss, but slightly leaning towards it not happening. And then when we were able to expose why the AI was forecasting differently to us, it was because all of the factors the humans were thinking about were things that had happened before; all of the various things relevant to it had, in one way or another, happened before. Showing the base rate, showing how the AI had reasoned, led the humans to go: do you know what, actually we think the AI's right and we are wrong. And that really matters. If you think it's a coin toss when it isn't, your resource allocation against a bad prediction will be terrible. So we can hugely help you optimize.
We're also getting traction in the financial and insurance services industry. Why? Well, because we can show you the cost of being wrong; we sometimes call that EV plus.
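"EV plus" isn't defined on air; one plausible reading, sketched here with invented figures, is plain expected value, i.e. what acting on a miscalibrated forecast costs you.

```python
# Sketch of 'the cost of being wrong' in expected-value terms. All
# numbers are invented, and this is only one reading of "EV plus",
# not a definition of Cassie's metric.

STAKE  = 1_000_000   # capital committed if you act on the forecast
PAYOFF = 2_500_000   # gross return if the event happens

def expected_value(p_event: float) -> float:
    return p_event * PAYOFF - STAKE

p_believed, p_true = 0.55, 0.30   # your forecast vs. the real odds

ev_believed = expected_value(p_believed)   # looks attractive: +375,000
ev_true     = expected_value(p_true)       # actually negative: -250,000

print(f"EV at believed p: {ev_believed:+,.0f}")
print(f"EV at true p:     {ev_true:+,.0f}")
print(f"cost of being wrong: {ev_believed - ev_true:,.0f}")
```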
So, in high-frequency trading, right, you already use, most of the time, fully automated models, and they're predicting the next number in a sequence. Well, that's not what we do. What we do, when there's no good prior probability distribution, is help you to make the prediction: what's likely to happen, based on the available information, which is what humans do now.
So normally you might be high-frequency trading with automated, you know, chaos-theory-based models or whatever. But at the top level, you're still sat around a boardroom table arguing over whether you should invest in mining stocks or macro or whatever. You've got all these different strategies you've gotta argue over.
We can tell you a number of things. We can tell you, if you do X, what the probability of you achieving Y, like a particular return, might be. We can score all the people around the table for who maximally reduces uncertainty and who's the best forecaster, and therefore we can tell you whose decisions to weight.
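A minimal sketch of that weighting idea, with invented forecasts and skill weights; in practice the weights would come from the historical scoring Keith describes.

```python
# Minimal sketch of weighting forecasts by track record.

forecasts = {            # each person's probability the event happens
    "cio":        0.70,
    "macro_lead": 0.40,
    "quant_lead": 0.55,
}
weights = {              # normalized weights from past accuracy (invented)
    "cio":        0.5,
    "macro_lead": 0.2,
    "quant_lead": 0.3,
}

# Weighted average: better-calibrated voices move the house view more.
house_view = sum(forecasts[p] * weights[p] for p in forecasts)
print(f"weighted house forecast: {house_view:.3f}")   # 0.595
```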
And we can show you when an AI outperforms you as well. So equally, if you're trying to select a quant model, we can show the probability that one quant model contributes to your achieving your outcome compared to another, so which one you should trust the most.
I mean, it's exciting, because we help our customers in multiple ways: from optimal resource allocation, to understanding risk, to seizing opportunities, to making their strategies things that constantly update as new information comes in. And, as I say, I think it is being transformative for those that we work with. That's only gonna increase as we grow the business, and it's why we're growing as fast as we are.
Chris: Yeah. As you said on the blog, everything is prediction, but you also talk about the narrative. And certainly the organizations that I've worked in seem to be more narrative-led rather than, you know, potentially more data-driven. But some would say the narrative is more important. What do you say to this?
Keith Dear: Look, narratives are important, but narratives based on truth are even more powerful. If you rely on narrative alone, you're gonna be constantly mugged by reality. It's not okay to just completely ignore reality, 'cause eventually, like Mike Tyson said, it punches you in the face. So I think narrative does matter. Narrative's how you explain your strategy to people. But people should be, and often are, interrogating that narrative. And if you keep up a false one, eventually it gets exposed as reality updates on you, and things collapse around you at that point.
So, confronting reality earlier is what we help you do. It doesn't mean narrative doesn't matter. As a leader, you're still gonna have to explain why you're doing things. It also isn't the case that you always have to rely on Cassie's probabilities, right?
Like, you might see: okay, I've crowdsourced across my organization, and my organization thinks there's only a 10% chance of us succeeding in this outcome that I wanna pursue, and they think these factors are most important. And I've used the AI, and the AI thinks there's a 9% likelihood, and it thinks these factors are most important. But I, CEO, senior decision-maker, minister, whatever, think, based on my expertise, that I know better. Okay. Well, that's absolutely what you get paid to do sometimes, to make decisions that are hard.
The difference, though, is that we will constantly score: okay, how much did the things you said would influence the outcome actually influence the outcome? And when you said it was an X percent probability, how often were you right over time? And so we can begin to weight your decisions as a leader in the same way that we're weighting the analysis and the forecasts of those around you. And so over time we change how HR works as well. In theory, you should promote those people that are consistently able to successfully bet against the odds, right?
So I'm not saying narrative doesn't matter. Narrative is still a part of leadership, but getting to what your narrative is should be an exercise in truth-seeking. And I think, as people use Cassie, those organizations that are willing to confront reality will outperform those that don't. Their stories will therefore be more powerful, they'll be believed more often, and the competitive advantage, I think, will speak for itself.
Chris: Yeah, it's interesting. And when you are going into these organizations, you help give them a prediction. I guess this may be true of larger organizations especially, where it can potentially be a bit harder to influence change: how do you help them turn that prediction into actionable, good strategy?
Keith Dear: So the first thing is working with them to help them define their outcome in resolvable terms. And that, for now at least, still involves some consultancy and some human engagement, to really work it and say: is that actually what you wanna achieve? And how would you know you'd achieved it? What are the criteria? It might be that the BBC reports it; it might be that the CEO says it; it might be that our annual sales hit some level. How you would determine it depends on the organization. But you work with them to get to a resolvable outcome and say, okay, yeah, if this happened, it would resolve one way or the other.
That normally takes some time. The factorization element and the scoring, that usually involves a little bit of worry. And one of the ways I reassure people is that we will only show, and it's up to you, but generally we say the top 10. So the top 10 forecasters and the top 10 analysts. Who are the most accurate: if they say something is or isn't gonna happen, how often are they right? Who are the most well calibrated: if they say it's gonna happen with 80% confidence, those things should happen 80% of the time. Your calibration is the gap between that confidence and how often things actually happen.
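That definition translates directly into code; the forecasting history below is invented for the sketch.

```python
from collections import defaultdict

# Calibration exactly as defined above: the gap between stated
# confidence and how often those things actually happened.

history = [              # (stated probability, did it happen?)
    (0.8, True), (0.8, True), (0.8, False), (0.8, True), (0.8, False),
    (0.6, True), (0.6, False), (0.6, True),
]

buckets = defaultdict(list)
for confidence, happened in history:
    buckets[confidence].append(happened)

for confidence, outcomes in sorted(buckets.items()):
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"said {confidence:.0%}: happened {hit_rate:.0%} "
          f"(calibration gap {confidence - hit_rate:+.0%})")
```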
And then, uniquely to us, who are the best analysts? So who are the people whose questions maximally reduce uncertainty around an outcome? We only show the top 10; I mean, it could be the top three if you want. The reason we do that is that it's a strictly proper scoring system, as my colleague John would say. That means you can only optimize for truth-seeking; you can't really game it. If you deliberately put bad predictions and bad questions in, you'll just get weighted down and down and down over time until you're not really making any difference to the platform.
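A small sketch of why a strictly proper score resists gaming, using the Brier score from Tetlock's forecasting work; the events and the score-to-weight mapping are invented.

```python
# The Brier score is minimized, in expectation, only by reporting your
# true belief. A running score can then drive each person's influence.

def brier(forecast: float, outcome: int) -> float:
    """Squared error between stated probability and what happened (0/1).
    Lower is better."""
    return (forecast - outcome) ** 2

# An honest forecaster vs. someone bluffing max confidence on the
# same events.
events = [(0.8, 1), (0.8, 1), (0.8, 1), (0.8, 0), (0.2, 0)]

honest = sum(brier(p, o) for p, o in events) / len(events)
bluff  = sum(brier(0.99 if p > 0.5 else 0.01, o) for p, o in events) / len(events)

print(f"honest mean Brier:  {honest:.3f}")   # 0.160
print(f"bluffer mean Brier: {bluff:.3f}")    # 0.196

# Turn scores into influence: worse scores, less weight on the platform.
def weight(score: float) -> float:
    return 1.0 - score    # toy mapping, invented

print(f"weights: honest {weight(honest):.2f} vs bluffer {weight(bluff):.2f}")
```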
It also means that you, as an individual on the platform, are constantly getting feedback on your own performance: how good am I, Keith, at forecasting and at reducing uncertainty with my analysis? And we know from Philip Tetlock's work that getting that kind of feedback enables you to constantly improve as an individual.
So by only showing the top 10, we don't punish anyone for being bad. We just give them the chance to improve, but we incentivize them, in scoring the top 10, so that when they're arguing for something around a board table they can say, look, I'm in the top 10 here; you should weight me more heavily than others.
That said, we can only do so much. In the end, Cassie can be like a bucket of cold water in the face. It might well tell you that the thing you want to achieve only has a very low probability of success. At that point you can continue if you want, or not. And there are organizations who, at that point, decide they don't wanna talk to you anymore. And that's okay. I think over time those organizations will be outcompeted by the others that are more willing to confront reality.
Chris: That makes sense. And I think that segues onto the final question here. Innovation, and the need to redesign organizations, can be a barrier to success. What advice would you give to organizations here?
Keith Dear: Well, I think organizational design is also just another form of prediction. Everything is prediction, you'd expect me to say that, but it is: when you design the organization, you are predicting that this structure, putting these people in these positions, will ultimately increase your probability of achieving your outcome. So again, you'd expect me to say this, but I'd say it intellectually as well as for commercial reasons: you should use Cassie for your organizational design, for the transformation itself, in order to make sure you optimally design your organization to achieve the outcome that you want.
One of the things that we insist upon, completely insist upon, is that what we do with organizations is not innovation theater. So you need to think from the very beginning: how would you integrate Cassie's solution into your existing decision-making mechanisms? What would success look like, for you, integrating us into your systems? So it's proof of value from the start. Like, if we did this, you would be convinced that you should adopt it in every single decision you make.
So spending time with organizations to be clear about that upfront is a really helpful way of making sure we're clear on where we're going, and that we don't become this thing funded from some innovation budget that, after a year, a couple of people get promoted because they did something interesting, but then gets dropped.
So yeah. And I think if your organization is not a truth-seeking organization, if it's gonna be discomforted by having to face reality, then we're probably not for you. Forgive me, but we called Cassie after Cassandra, you know, the Greek, or rather Trojan, priestess condemned always to be right and never to be believed. We tell you what you need to know and not necessarily what you want to hear. And if that's not for you, then maybe you should hire a more traditional consulting organization and not Cassie's truth engine.
Chris: Nice. And if there's anyone listening who is potentially in a position to act and is interested in what you're doing, is there anything else you would like to say about Cassie, what you're developing, and why now is the time to consider a company like yours?
Keith Dear: I think now is the time because, as I said, we're moving into the allocation economy, in which, as machines become increasingly smarter than us, as they increasingly change how we make decisions and ultimately replace us in the decision-making system, for a while at least, at the top level, what humans are gonna be doing is deciding which goals to work on and where to allocate resources. Including: where should I allocate my intelligence, in this case my artificial intelligence, and where should I allocate my humans, and how much should I trust one or the other?
As we move into that world, Cassie can help you make that transition seamlessly: from today's human-only decision making to, if you were using it tomorrow, human-machine decision making, and it can tell you when, and where, the right moment is to move to machine-only decision making. I don't think anyone else can do that. So starting with us now is building for a world of AGI, and if you are also convinced that that world is coming, it makes sense to start working with us as early as possible.
Chris: Cool. Well, I think that closes the meat of the podcast. Keith, thanks so much for coming down and sharing your knowledge. It's been a real pleasure. Just some closing questions, for fun. What's one application of AI you're looking forward to seeing the most over the next six to, let's say, twelve months? Because it changes so quickly.
Keith Dear: I mean, look, I'll tell you what I often yearn for: the era of self-driving cars. I know some people love driving. I don't particularly love driving, and plenty of friends might say I'm not that good at it either, but think of the amount of time we waste driving places. So truly self-driving cars, where you can get in and get on with work, or get in and read, or get in and have a conversation. Self-driving cars, as and when they arrive, I think will profoundly improve my life, and indeed that of all my passengers. So there's one.
Chris: Nice. And what's one book or paper, I know you've mentioned a few already, but what's one other paper or book that's really challenged the way you think as a person or as a leader?
Keith Dear: Well, Deep Utopia, Nick Bostrom's book, which I mentioned earlier. I think Servant Leadership by Robert Greenleaf, which I referenced earlier on leadership. And of course I can't not recommend Superforecasting by Philip Tetlock, 'cause without that book, I think, probably no Cassie.
Chris: Oh, and final question. You seem so excited and passionate about Cassie. Where do you hope it will be in the next two to five years? If you could go into the future and see it, what are you hoping for?
Keith Dear: We're growing super fast. So over the next 12 months we're gonna lean into two things. One is commercial growth: onboarding more customers and expanding our offer to new industries. I'm looking forward to that, and to bringing on the team that will enable me to do that at a greater scale than we currently can. And I'm super excited about kicking off our R&D, real deep-tech R&D. Some of the team are working on it now, but it starts properly from December, and we're hoping to have hired 25 people by the end of the year against really structured, predetermined areas that we wanna research. I won't get into too much of what we're doing this early, but that really excites me. It would change the business and push it much further.
I mean, three to five years? I genuinely want us, in time, to be a competitor to OpenAI, DeepMind, and Anthropic. And chasing that ambition, I think, gets me up outta bed in the morning and motivates me to keep going.
And then the thing that, prior to all of this, I've enjoyed most in my career is working in a really, genuinely high-performance team. And I think it's my job, we talked about this in the leadership sense, to set up the structure that enables really capable people to thrive and succeed. But being part of an environment like that, whether it was the team I worked in in Number 10, or a team I worked in in Afghanistan a few years back, or various others along the way: there's nothing more satisfying than really pursuing a big, audacious goal, something that's meaningful to you, with really capable people. So I'm excited about both the day-to-day part of that and the long-term ambitions and goals.
Chris: Nice. And I think it would be amazing to see a homegrown company compete with the likes of Anthropic and OpenAI. And I know DeepMind has origins in the UK, but now a lot of their operations are in the US. So, yeah, rooting for you, Keith. Thank you so much for coming on today. I really appreciate your time, and it was a great episode.
Keith Dear: Thank you, and thank you to anyone who's stuck with us to the end of the podcast. Looking forward to working with you in the months and years ahead. Something that I should have said, Chris: I'm speaking here purely in a personal capacity, so my opinions don't reflect those of the MOD, given my reserve service, or indeed anyone else. They're just my views on where AI is, where it's going, and what it means for the world.
Chris: Thank you.