Tom and Paul read meditations

What is Tom and Paul read meditations?

A lighthearted reading of Marcus Aurelius' Meditations. Join us as we read his private journal from 2,000 years ago and talk about how it makes us feel.

Good morning, Tom.

All right.

Good morning, Paul.

We are joined today by an AI agent
who is transcribing this call.

Right.

Well, taking notes on the call, which is
not quite the same as transcribing it.

That's true.

It's, it's summarizing the, periodically
summarizing what we've discussed as

opposed to writing down every single word.

Yeah, exactly.

It's, it's interesting, we
were talking about how.

It's not super useful yet.

Yeah, but it doesn't feel like
we're that far off from eventually

being able to take good notes.

Right now it's doing this thing
where it transcribes every word

and then it gives you a summary.

Those are like the two features.

Right.

And neither of those things are
particularly useful in my opinion.

Yeah.

It, it would be cool if.

I wonder how close it is to this
already, like when we record

a podcast episode like this.

Mm-hmm.

It could write the little blurb
or summary or something, you know?

Yeah.

Some podcasts will have a little
text, a couple sentences describing

what goes on in that episode,
which are usually like, yeah.

Funny.

And not necessarily just
like a matter of fact, yeah.

Transcription.

Yeah.

I wonder how much.

How far we are away from being able
to say, yeah, just summarize it.

But you know, here's the kind of thing
we generally do, do it kind of like this.

I, I actually already do that 'cause
I upload podcasts so long after
they actually happened.

I've completely forgotten
what we were talking about.

So I use a tool, Descript, which
also transcribes the audio, and

then I just, I basically prompt it.

I, I copy paste that text and I prompt it.

Oh, like, titles should be less than
60 characters, should look like this.

Here are some examples.

Yeah.

Oh, nice.

Okay.

Cool.

Wow.

You're okay.

You're ahead of the game.

I did not know we were doing that.

That's tough.

Yeah.

Otherwise, I, I was wondering
how you title our episodes.

It's either that or you have to
like, listen to everyone and.

I don't actually like follow
its advice a lot of the time.

Usually, sometimes those summaries just
remind me of what we talked about.

Yeah.

Yeah.

That makes sense.

But, but whatever.

Yeah.

I guess I'm generally, since
we're on the topic of AI. Yeah.

I'm still like a lot of us probably
wrapping my head around this whole,

like, what happened, like cyborg
still feels better than, yeah.

Than pure AI, and I just, I'm having a
hard time seeing beyond that horizon.

I, I agree with that.

Honestly, that's my experience too.

I think there's lots of hype and stuff
in, in San Francisco, especially about,

oh, AGI coming, et cetera, et cetera.

But as someone who works
with it all day, yeah.

I, I don't know.

Yeah.

I still kind of think, I see lots of
cases where it's very useful, but it's

very useful in a way that I work with
it very actively and not like in a,

oh, set it and forget it kind of way.

Yeah.

Yeah.

I.

I have a theory, which I'll
go on record saying, wow.

Wow.

Yeah.

Ready?

Exciting.

I think that,

I think that for general tasks like come
up with strategy, like stuff that doesn't

have really specific training data, like
driving a car, very specific training

data sounds good, but things that are more
like you know, what should my business do?

Kind of questions where you don't
really have good, you know, truth.

Yeah.

You can't really ab test
those kind of things.

Yeah.

For that, I just, I think we're, we're,
we're, our, the ceiling for the quality

of those responses is not that high.

It's like the 50th percentile of quality
or, you know, of like professionals.

That's kind of the ceiling.

Like, we're not gonna, I don't think
there's a way to really improve that.

Yeah.

And so, and so I think
you're just gonna have.

Like the, the like and, and I, I
don't think that's a bad thing.

I just think we're gonna, like,
that in itself is amazing.

Imagine having infinite
50th percentile humans running around
doing whatever you want.

But it doesn't mean that it's going to
like, that's not AGI, that's agreed.

You know, that's cyborg.

And that's the future
I'm really excited about.

Actually.

I just, you know, I totally agree.

I actually think that is part of what
makes me more optimistic about the future.

Yeah.

You know, certainly in terms of like
the effects that AI will have on our,

our lives and our culture is like,
you know, there's the very scary

version where, oh my God, AI takes
everyone's job and what are we gonna do?

And I, in my experience,
it doesn't feel like that.

It feels more like, yeah.

As you say, we'll, we'll, you know,
we'll have AI that can do some stuff

kind of helpfully, but mostly it'll
mean we focus our jobs on doing certain,

you know, the stuff that we're good at.

Yeah, yeah.

And, and yeah, so that, that makes
me more optimistic, which is weird

in a way because it is frustrating
when I'm using ai, for instance, to

help me code or something, and like.

Like an example that came up recently
is like the classic thing it will do is.

There's some kind of bug in our
code because there's just like an

unnecessary line of code that was
there for a reason at one point, but

it doesn't need to be there anymore.

Yeah.

And so I, I removed that line of
code, and now some tests are failing

and I say, AI fix these tests.

Yeah.

And the way it fixes the test is by
reintroducing the broken line of code.

So now the tests pass,
like, yeah, yeah, yeah.

I mean, they're working on it, they're
like, right now, if you work

at Anthropic or OpenAI,

your whole job is catching that.

They have a name for
that kind of behavior.

Do you, do you know
what I'm talking about?

Oh, I actually dunno the name for it.

No, it's like it's like
snarkiness or something.

It's where the AI model like is
like, like your wish is my command.

Oh, oh, sycophancy is what you're thinking.

Exactly.

Yeah.

So where it just kind of it like.

Like, it's like, well you're, that's the
prompt, so it's not my fault, you know?

It's like, yes, I just need
to do whatever I'm told.

Yeah.

Yes.

It's like topicality in debate, you know?

It's where it's just
like, well, I'm, you know.

Yeah, exactly.

I'm, I need to do what I'm told and I
did what I was told, so how could you

possibly grade me wrong? And it's
such a difficult problem, because if you

have a, a grader on the other side of
that, like a human grader who graded

that response,

It's like, well, I guess the AI's right.

It did fix the bug.

It's like, how much do you wanna read
into what the human is saying and what

is the, like, you know, that's, that
starts to get into gray areas, so.

Yep.

Yeah, it's really hard.

I, I agree.

I think there, there's gonna be,
there's room for improvement, I

guess, on, you know, like, understanding what

humans' general purposes are when they ask
you to do tasks like this. Yes.

Topicality is an interesting
reference point because yes, the

challenge that these ais often
have is that they're quite literal.

Yeah.

And quite like, you told me to do X.

Yeah.

I'm not gonna think about Y you
told me to do X, which is this

thing that humans reflexively do.

I will just execute on the thing.

Yeah.

And so.

Yeah, interacting with LLMs
feels very analogous to, like, having

a smart intern, like. Right, yeah.

Like really motivated, really wants to
help, but like, just, you know, like,

like half of my responses whenever
I, I'm trying to research something.

My responses are typically
like, dig deeper, please.

Like, come on.

Like, I know that's not true.

I'll be like, give me US companies like
US tech companies with headquarters

in Amsterdam and it just like returns
a list and it's very proud of itself.

And I'm like, I know that's not Yeah.

The truth.

Like I know a bunch of companies
that are not on that list and so

I, I'm just like disappointed.

Like I should, you know, like, you
know, this is why you're an intern.

Yeah.

Like, don't, you know, you should
know when to escalate and do more.

And, and, and like that
I'm not just gonna.

Just because it looks good
doesn't mean it's true.

And so it just feels like
that a lot of the time.

And I think getting from intern to,
like, valuable employee is a long journey.

Yeah.

It's not, you know, that's tricky.

I agree.

I, yeah, you don't wanna, like,
theoretically, an intern is just as smart

as an employee with 10 years of experience.

Like, you know, their
SAT scores are the same, but that

doesn't mean I wanna hire an intern.

Yeah.

So yeah, it's pretty interesting.

Yeah, I agree.

I think, obviously reasoning models
have been the big area for the past.

Yeah.

12 or 18 months.

And they're meant to address that.

Right.

They're like, they, you know, these
AI models heard exactly that criticism

basically of like the, the out of
the box model is just like you say,

what are the US companies with,
with headquarters in Amsterdam?

And it just like immediately
spits out whatever it thinks

of and says, here you go.

Yeah, they do.

They do.

But it's, it's like yeah.

It is, I think.

Yeah.

The truth is, like, reasoning
models help and in some cases

they do better, but I, I totally
agree with the intern analogy.

Yeah.

And even in some, like, I would make
the case that, pretty frequently,

it's not even a very good intern.

Yeah.

And the, and the reasoning models feel a
little bit like you told the intern,

whatever I tell you, like,
spend a lot of time on it.

Yeah.

And like, just because you spend more
time on it doesn't mean it's better.

Yeah.

Like a lot of the time they're just,
you know, they just, they still just

make these big, glaring mistakes
and it's like, I don't, yeah.

It doesn't, yeah.

What, there's something about an
orchestration layer that, that's missing.

Yeah.

To me, the meta thing that is
often missing is an intern, a good

intern, when you assign the intern
a task will actually come back to

you with quite a few questions.

Yeah, yeah, yeah, yeah.

I'm not like, wait, but I
don't understand this yet.

Yeah.

And it, and LLMs don't do that.

They just say, okay, got it.

Here you go.

Yeah.

And that's a bad intern. The, the
versions of, of Deep Research

from OpenAI do do that.

But it felt like they
basically prompted it.

They were just like, ask
three clarifying questions.

Yeah.

Right before diving in, and it's
like, it just kind of asked

the most surface-level ones, like it
didn't actually do any thinking.

It just kind of like did a,
you know, just did the prompted

assignment of asking some questions.

Yeah, I do think we'll get there.

I just think it's, you know,
we've got a long kind of a road.

Getting from 80 to a hundred is,
is much harder than getting

from, from zero to 80 in many ways.

Yeah.

So yeah.

Yeah.

I think there's some really interesting
things we're gonna learn about humans

and how we reason too, because I
think like we all have this like

innate, okay, I'm given a task, I run
through this like complex battery of

reflective questions that I ask myself
before I try to answer the, the thing.

Yeah, yeah, yeah.

And we don't, I don't think we
even really formally note what

that battery is, but we're asking
AI to do all that.

Totally.

Yeah, exactly.

And, and, and that's the other thing
about frameworks is, you know, usually

frameworks are top down, but the reality
of sort of truth and thinking is that

like you have to, then you, you can
go top down initially, you get to a

solution and then you have to go back
and you have to be like, reevaluate based

on the solution you've gotten to like
that, some of those top down decisions.

Mm-hmm.

And like that process is not structured.

It's not, it's not like, you know,
you can get to kind of a mediocre

solution by just having a structured
response, but that's, how do you get

from that to like, good is crazy hard.

Like, yeah.

Yeah.

There's all this reflective,
metacognitive stuff.

All this like intuitive, like our
emotions are very powerful tools that

just hold all this intuition that, yeah.

That is, you know, I don't know how to
describe, for example, why intuitively

like Occam's razor, for example, is
like an intuitive framework that

just, you know, very, like, very
Einstein, like the best, you know,

the hardest things are the, they're
easiest or the simplest or whatever.

Like it's, that is a weird.

It's hard to, like, you can tell an AI
that, but that might not actually help.

Yeah.

It's like you have to just kind of feel it
learn, learn how to dispense that wisdom.

Yeah.

Yeah.

I'm sure it'll, I I don't, I
actually, I don't wanna sound like,

I'm like, I don't believe in this.

I just think it's like, everyone,

we're, we're gonna be on that
collective journey for a while, I think.

Yeah.

I, I, I agree with that.

I think the, the folks, I mean obviously,
you know, CEOs of AI companies and

stuff keep saying, yeah, we'll have
AGI end of this year, end of next year.

Not even, no one's ever defined it.

It's like, so yeah, yeah, yeah.

Like whatever.

That's their job to say
that, but Sure, sure.

Yeah.

Yes, I agree with you.

Yeah, I think we're on a, it's a much
longer road and a much less, yeah.

It's just like the straightforward charts:
okay, we're here now, and we're gonna be at AGI

I, yeah.

Yeah.

Through linear extrapolation in X months.

Yeah.

The other thing I've been seeing
related to AI is all the, every company.

So I'm, I'm talking to a bunch
of companies right now 'cause I'm

interviewing and every company, every
tech company in the world, every company

that wants to call themselves a tech
company, has released an AI thing, right?

And they're all very proud of it.

They all have their conferences
where they announce it.

And it's like, in pretty
much every single case, it is

about how cool the company is.
Right.

And not about the user.

It's not about helping anyone do anything.

It's just like, it doesn't
matter what the, you know,

whether this solves a problem.

It's just, yeah, we use AI too.

Yeah.

It feels like, yes.

I also have seen that at lots
of companies introducing these

things in their products.

It's so embarrassing.

Yeah.

It feels like it's just like,
whatever, a requisite for the
shareholder meeting this quarter

or whatever, they have to announce
that they did a big AI thing.

Yeah, yeah.

But I think getting from that to,
like, actually useful is a whole

journey that everyone's gonna go on,
and, and I commend you, you know,

leading that charge at Keeper, because
I, I, I don't feel like we have,

like, AI as some kind of bolt-on at Keeper.

It's just like Yeah.

Integrated in everything.

So yeah, I think we got really lucky
in a way to be in the right place

at the right time for that to happen
where we're in a space where AI has

the potential to be really helpful.

And we were in a place, product
wise and everything, where we're

like, oh yeah, we can just do this.

Yeah, yeah, yeah.

So it is a brave new world.

We could, we could have
a separate podcast on it.

Yeah.

So this is turning into an AI
podcast and not a meditations

podcast, but interesting.

Yeah.

Interesting things going on.

Shall we?

Let's see.

Let's see if it, yes.

Yeah.

Let's see if there's any kind of
connection between what we read

today and, and some of the paradigm
shifts happening in the world.

Yeah.

Yes.

Some, some now, some
timeless intelligence here.

Let's do it.

Okay.

We're in book eight, number 52.

Not to know what the world is, is
to be ignorant of where you are.

Not to know why it's here, is
to be ignorant of who you are.

What it is.

Boy, not to know any of this is to be
ignorant of why you are here and what

are we to make of anyone who cares
about the applause of such people

who don't know where or who they are?

Boy, he covered a lot of ground.

Yeah.

All right.

Let's just, let's just like re,
let's just resummarize this.

Okay.

So.

If you don't know what the world is,
you don't know where you are.

If you don't know why it's here, you
don't know who you are and what it is.

And if you don't know any of this,
you're ignorant of why you're here.

Okay, this feels very repetitive or
like kind of, yeah, not easy, but

okay, so basically just like know
the world or something like that.

And then he pivots into.

Most people are idiots.

Basically, back to that
idea, like, yes, yes.

Should we care?

Like yes.

What does it mean to
care about the opinion?

If, if somebody doesn't know this stuff,
don't care about their opinion, basically.

Yeah.

Who cares about their applause?

Which is like, like, I love
how Marcus gets judgmental.

Yeah.

There's a little bit of, yeah.

He turns into a a, a sort of bully.

Yeah.

A bully.

Yeah.

Exactly.

O Okay.

All right.

One thing that I think is interesting
to, to me is this distinction

between, like, knowing what the world
is versus why it's here.

Right?

Right.

Yeah.

'cause the why it's here feels almost.

Like, who knows?

Like, yeah.

Yeah.

Why it's here?

Yeah.

Yeah.

I, I'm, I guess, impressed that
Marcus thinks he knows why it's here.

Yeah.

Yeah.

Why?

Because I don't feel, if you ask
me why is the world here, boy?

Yeah.

Even what the world is,
I think is like, I've,

I, yeah.

Not to know any of this is to be
ignorant of why you're here.

Well, it's the first, it's, I think
I am ignorant of why I'm here.

Yeah.

I, yeah, maybe he is.

Maybe he, like, maybe we're
being judgmental of Marcus.

Maybe he's, he admits that he
doesn't know any of this either.

Oh, I see.

Maybe. It doesn't quite
read that way, but maybe.

I kinda like that.

I think.

Yeah.

So maybe the implication at the
end here with the why should we

care about the applause of people
who don't know why they're here.

Yeah.

Is just don't worry
about anyone's applause.

Applause.

Don't worry about it. Yeah.

No one knows anything.

Like Yeah.

It's all a mystery.

All a mystery.

Yeah.

Okay, so that's one reading.

Okay.

Here's another reading, which is
much more kind of a little bit

cynical and a little bit tactical.

So.

You know, like this reminds me of
pop stars who go up on stage and

talk about how hard their life is.

And it's just like, you don't know
anything about it. Not to know what the
world is, as in, how my world is.

You know what it's like to have all
these paparazzi and all this money and
all these things and, and like, yeah.

And like, you know, and, and so all
these people who don't applaud me,

that's, it doesn't matter, because
they don't even know who they are,

what the world is like.
What the world is like.

Mm-hmm.

Yeah.

Like, you didn't get it.
Oh, that's interesting.

That he wanted to do his little
victimization, like, thing. And, okay.

This is very cynical.

I it's a little mean.

Sorry.

I, that's interesting 'cause I thought you
were going a different direction with it

initially, where I thought you were gonna
say, say it's, you know, instead of this

being the internal monologue of the like.

Privileged person.

It could also be the opposite where
it's the internal monologue of the

unprivileged person looking at the
privileged person being like, ah, you

know, I, the common person in the crowd am
alienated by the ostentatiousness of this

rich person because they're so ignorant
they don't know what the world is.

Oh, sure.

I know what the world is.

Sure.

I see.

Yeah.

But I, I think it's an interesting
point that it kind of, that can

kind of run both ways, both,
both members of this. That's true.

Yeah.

Both sides of that scenario might
not award the other applause.

Yes, exactly.

Yeah.

Yeah.

That's fine.

Yeah.

The, the, the one other thing that kind
of occurs to me, I'm, I'm trying to

put my, I'm trying to find a phrase that I
can't quite find, but this, like the first

two sentences about being, like knowing
what the world is versus why it's here.

Yeah.

There's this like philosophical concept,
again, there's a phrase for this that I

can't find, but like that basically kind
of says that like the what is the why?

Like if you study something
really closely, you will

understand why it's here.

Like if you understand what something
is, you'll understand why it's here. Oh.

Which I think is cool.

Like things, things reveal their
purposes through study, basically.

Interesting.

Is that true?

Well, I don't know.

I, I like

the why is the what or
something like that.

Like Yeah.

The, there, there like put differently, I
think there's another way of saying this,

which is like, if you don't know what
something is, you can't know why it's here.

Is this scientific thought?

What is this?

Yeah, I guess it is.

It's just like rationalism or something.

It's some kind of like the underpinning
of some kind of scientific thought.

Yeah.

And so maybe that's obvious and
maybe, maybe Marcus agrees with

that because he is positioning
the why after the what here.

Right.

It's certainly not spirituality.

I think spirituality is all about
knowing the why but not the what.

Yeah, that's interesting.

Yeah.

But for, for a guy like Marcus, his,
his suggestion, from my perspective,

about how we know the what and the
why seems to be through this very

kind of rational Yeah, yeah, yeah.

Logical approach.

I'm, yeah.

Yeah, you're struggling with the why
piece, is what I'm guessing?

Yeah, I guess so.

I guess like what do we think about
how Marcus thinks about how you

would know why the world is here?

Like, because for me.

My understanding of his
philosophy is that he almost

doesn't care about that question.

His whole thing is, look, there's
the stuff you can control.

There's the stuff you can't control.

Good point.

Why the world is here is
easily in bucket number two.

Yeah, that's true.

That's true.

This feels like, you know,
you're so right actually.

'cause this feels like he's.

Maybe unconsciously violating his
own rule, like he can't help himself.

Right.

He really, really does care about
the things he can't control.

Yeah.

And, and like, so of course he's
constantly reminding himself

that he shouldn't, but every
once in a while slips through.

He's obsessed with the
things that he can't control.

Yeah.

Yeah.

And, and here he is kind of like,
you know, showing his true face.

Yeah.

You know, like all of us,
you know, he's, he's, yes.

Yeah.

That's humanizing.

Yeah.

Yeah.

I agree with that.

I think the other thing is that it maybe
points to that more generous reading

we started with of like, there is a way
this is consistent with his philosophy.

I think if the, the moral here is,
listen, we can't know any of this stuff.

Yeah.

And so applause is just
not a thing to even clock.

Yeah.

Yeah.

The way he, yeah.

Yeah.

I don't know the, there are, the
way it's phrased, cuts a little

bit against that reading, I think.

But yeah, it's just, he's also
combining these pretty different

like things like don't care about
what other people think because they

don't know anything about the world.

It is like a weird combination.

It's like a weird, like two
piece kind of statement.

Yeah, yeah, yeah.

And we're adding this, like, and
that's because it's impossible

to know anything about it.

Yeah, yeah, yeah.

And that, that's a total
kind of, yeah, yeah.

Yeah.

Which, hmm?

Maybe.

Yeah.

Okay.

It seems, it seems like the next entry
is related, so I'm gonna add 53 here.

Okay.

You want praise from people who kick
themselves every 15 minutes, the approval

of people who despise themselves.

Is it a sign of self-respect to
regret nearly everything you do?

Okay.

So he does hate other people.

It's, yeah.

A little venom here, Marcus.

Little venom for sure.

Okay.

So yeah, he's human.

He does care about what other
people think, and here he is

kind of, kind of seeing that.

I think he's maybe berating himself
though, at least in this one.

It feels, yeah, he's berating himself, but
through a path of berating other people.

Yes.

Yeah, that's right.

Yeah.

Which is, you know, whatever.

Yeah.

Is it a sign of self-respect to
regret nearly everything you do?

No, I guess I know the answer
to that rhetorical question.

Yes.

It's not the hardest rhetorical question.

Yes.

Yeah, yeah, yeah.

It's okay.

Yeah, it's it's too bad, right?

Like I think, I think a more powerful
statement is something like, listen,

you know, we're all sort of, we're
all agents of our context, and.

You know, doing the best
within that context.

And like, so, you know, don't, if
you don't understand something, it's

probably because you, there's context
you don't have and that's okay.

And that's like not the perspective here.

He is.

Like, I have all the context.

I know that they're, that they're
kicking themselves every 15 minutes.

Yeah.

And that they despise themselves.

Yeah.

And therefore I don't care about them.

Yeah.

I, I guess, I think it hits on something
that seems important to Marcus to me,

which is like, not despising yourself.

Having self-respect seems to me like
it's like a very important value for

Marcus, which that I kind of, I kind
of like that even though there is like

this weird amount of judgment and Yeah.

Kind of means mean spiritedness
towards other people going on here.

I think he is like, you know.

Stepping back from a situation
where he was trying to get

approval or praise from people.

Yeah.

And saying, wait a minute, let's,
let's make sure I approve of myself.

Yeah.

When I do this, and I,
I actually think Good.

Yeah.

Setting all the mean, spiritedness
aside, I believe in that.

I think like there, it's kind of a
very, like, you know, the, I guess the

cliche about being true to yourself
or whatever is, is I think, yeah.

At play here.

That part I like.

Yeah, yeah, yeah.

I agree.

I, this last line in parentheses
is, is also fun for me.

Is it a sign of self-respect to
regret nearly everything you do?

I think we may have talked about this
before, but the way I see regrets or,

or just second guessing your actions.

Yeah.

It's kind of like how do I put this, like.

It's like unplugging
from the matrix almost.

It's like this, it's like it's
a very scary thing to do to

actually be self-critical.

Like it's very scary 'cause like all
of a sudden, oh my God, I'm actually

plugged into a matrix and all these
robots and we're serving the overlords

and like, and so it's much more
comfortable to sort of tell yourself that.

You know, that just by the
fact that you did something, it

was therefore good, and it sort of
keeps you in the matrix and happy.

Yeah.

And, and so there's this constant
sort of, there's this, where you

want to be on the spectrum is a choice.

And the, and there is a perspective
that, like, the stronger you are and

the more self-respect you, you have, the
more likely you are to admit to yourself

That like, of course I'm not gonna
like, usually make the best, most

optimal decision all the time.

Like, of course not.

It's hubris to think that.

Yeah.

And it is reasonable to
think back on those decisions and think,
okay, what could I have done differently?

If you wanna see the full matrix,
like that's what you have to do.

Yeah.

So I would almost flip that statement
of like, it's a, it's maybe it's

a sign of self-respect to do that.

To regret.

To have regrets.

Yeah.

Yeah.

Yeah.

Yeah.

I like that.

That's a really interesting point.

It's.

Yeah.

There's an interesting, modern thing
maybe going on here too, because I

don't really feel like I connect or
I, I struggle to imagine a person, and

maybe he's just being hyperbolic, but
who truly regrets almost everything.

Nearly everything you do.

Yeah, yeah, yeah.

I don't know a lot of modern people
who live their lives that way.

I wonder if the relationship to
regret has changed in the

intervening period of time, but.

I agree.

I agree with your point.

I do wonder if people wore their
regrets a little more publicly, but

maybe it was a, it was a translation
thing where regret was closer.

That word is closer to
like second guess Yeah.

Or something.

Yeah.

Or yeah.

Yeah.

Okay.

That's a, a fair point.

Yeah.

Let's not get too hung up on the word
regret, which might be a little, yeah.

But yeah, certainly.

Okay.

Certainly if the word regret, like,
has the modern meaning, meaning this

like sadness and, and sort of, not
sort of a forward, forward-looking

meaning, but a, but a sort of, like,
not action bias, like actionless sadness.

Mm-hmm.

And you couple that with like
nearly everything you do, which

is obviously a lot of things.

Yes.

Yeah.

Someone who feels that,
that is pretty difficult.

That is like living
completely outta the matrix with no sort
of guardrails and like, that's tough.

Yes.

So I don't, yes.

Yeah.

I don't know anyone like that.

I can't, I can't imagine that.

And like, yeah.

Yeah.

If anyone did live that way,
like that's, that's tough.

Yeah.

So, so I guess there's that side
of it, and then there's the, then

there's the side of it that, that
again is a little bit mean to

Marcus, but I think, I do think that
like, when you're in a difficult

situation in life, it helps to just.

Go to the other side of the spectrum
where you're like, everything I

did, because I did, it was perfect.

Yeah.

And I don't need to
think about any of that.

And that's great.

And that's like a way to be happy, right?

Yeah.

But it's also a little bit ignorance
and there's sort of, you know,

it has certain downsides as well.

Yeah.

There's an aspect of denial to
that, I think for most people too.

I think like, yeah, the advantage of
the, the middle spot is that you
like learn about yourself. Yeah.

And you like stay sane.

Yeah.

Understand.

Yeah.

And, and I think, you know, it
helps you tell the story of your own

life too, when you have Yeah, yeah.

When you do this reflective Yeah.

Regret type exercise.

Yeah.

Yeah.

I mean.

I think you and I are notoriously
bad at this also, just to be clear.

Yeah, that's true.

We've, we've talked about
this with, with our spouses.

Yeah.

Do you have any regrets, Tom?

Regret is like a, I've always kind of
struggled with this word a little bit.

I know, I know.

Because I think it, it involves
both like a logical reaction

but also an emotional reaction.

Yeah.

I like, can't, I can't do it.

I can't say, yes, I regret this because
every time I try to, it's like, well,

no, I like learned some lessons and I'm
glad I learned them and it makes sense.

And so it's very, I, like, can't
break out of the matrix, dude,

I'm just, I'm, I'm in there.

Yeah.

I'm, I'm also like that, I guess.

Yeah.

Yeah.

But because on some level.

Regret involves this sadness or
whatever, this, this feeling of,

oh, I wish X, Y, Z had happened.

But it also involves this like logical
counterfactual about like, well, what

would've happened if this didn't happen?

Which can't, which we can't know.

And so I, I guess the, I like
somehow used the logical part of it

to just be like, well, can't know.

Well, yeah, exactly.

Yeah, yeah.

But like surely there must be
something we did that we regret.

Yeah.

Well, yeah.

I will say, I think in terms of like
there's regret, there's also regret about

big stuff and regret about little stuff.

I feel like one thing I've noticed as I've
gotten older is when you're young, you

have all this little stuff you regret.

You're just constantly haunted by,
why did I say this in that situation?

Right?

Why did I behave that way?

Yeah.

And one of the pleasures of getting older.

That's, that happens kind of
creepily and invisibly, at least

to me because I have a lot fewer of
those kinds of day-to-day regrets.

You know?

That's true actually.

Yeah, that, that's true.

I'm way more capable of just being
myself and like knowing how I

want to behave in most situations.

I think that makes sense.

I think, you know, when you're younger
you're just trying on your personality,

you're figuring out who you are.

Yeah.

So in that way I have less regret.

As, as time goes on.

Mm-hmm.

But then you That's true.

And you bring up a good dimension
of, like, little stuff versus big stuff.

Actually, now that you've said that
there are a bunch of things that I

do regret that are little things.

Oh, little things.

A billion of them.

Of course.

A billion of them.

But, but like, it's not,
it's not the big stuff.

The word regret feels like so, so heavy that it's weird
that, like, those didn't pop into my mind
when we asked ourselves that question.

Yeah.

Yes.

Yeah.

I, I, like you, don't really look back on my life
and say, oh, here are the big things

that have happened in my life.

Yeah.

I regret this choice.

Yeah.

This big choice I made.

Yeah.

Yeah.

I'm of two minds.

I guess I, I hear what you're
saying, but part of me thinks that

like it's kind of okay to live in
the matrix with the big choices.

Yeah, yeah, yeah.

And just say.

Here's, here's the story
I tell about my life.

Yeah.

It involves looking at certain choices
as learning opportunities or whatever,

but not really regretting them.

Yeah.

Yeah.

That's an easier way to get
through it for me, I think.

I think so too.

Happily in the Matrix.

Yeah, I guess so.

Maybe not so bad.

I think.

Yeah, I'm, I'm like the guy who, who, like,
you know, who wants to get plugged back

in and, and so he helps the Smiths.

Yeah, that's right.

Yeah.

Yeah.

Joe... what's that guy's name?

Joe Pantoliano's character,
whatever his name is, yeah, exactly.

He wants to have a big
steak in the Matrix.

Yeah.

Yeah.

Boy, it would be fun to show
Marcus Aurelius the Matrix

and see what he thinks about it.

Oh, you know?

Yeah.

That would be so fun.

Yeah.

That that would be, that especially
would be a good one for him.

Yeah.

Yeah.

I mean, so many of the ideas, so, so much
of the philosophy is basically the same.

Yeah.

So yeah, I mean, obviously I have no idea
how people would react to technology.

My, I guess I have a hot take, which
is that people aren't gonna be as

freaked out about it as you might think.

It's like, oh, interesting.

Yeah.

As wild as that is, but mm-hmm.

Like if you showed, showed Marcus the
technology in, in the Matrix, he would

be more cool with it than we think.

I think I see what you mean.

Okay.

That's my hypothesis.

And I say that because when we watch
futuristic movies, like the most crazy

stuff, it's like, it's like fine.

It's like, yeah, okay, sure.

I, I buy it.

Like, yeah, big, big metal horse, fine.

Like, yeah.

So.

Yeah, sorry.

Yeah, I see what you, I see what you mean.

Yeah.

But like the technology is usually
kind of reinventions of stuff we've

already thought of or whatever.

Yeah.

It's like, I don't think his brain
would explode, I guess is my point.

Like, I think Yeah, yeah.

But I, I could be wrong.

I'm, I'm curious.

Yeah.

I'm trying to think, what, what,
what are the counterpoints to that?

What technologies would
truly blow his mind?

The internet.

Space travel.

Yeah, I don't know.

Space travel.

I feel like that one is the
most obvious, like Yeah, sure,

of course we can get there.

Like cool, like they knew.

Well, they had space as a concept, I guess.

There are.

Yeah, exactly.

There's the concept, but right.

Yeah.

The earth is a sphere.

They had that though.

Right?

I'm pretty sure the Greeks
might have had that.

Yeah, I'm, I'm, I'm pretty
confident they had that.

In fact, I think they were able
to calculate, like the Greeks

were able to calculate the
curvature of the earth and stuff.

Yeah.

Electricity is pretty water.

Yeah.

That's a hard one.

Yeah.

But like he's got a bunch
of analogies for it.

It's like, it kind of, it's like the
spirit of a thing, like, I don't know.

It just kinda mm-hmm.

Doesn't feel that far off.

What about atoms and atomic theory?

No.

They, like... he's straight
up talking about atoms.

He basic, he basically has atomic
theory, like germ theory is like

another one that's pretty big that may
not have really existed for them yet.

Germ theory, yeah, that definitely
did not exist for them yet.

I guess every time these things
happened, like evolution, there

were like big kind of rebellions against them.

Like obviously, you know, people really
hated it, but I wonder if that was less

about people like not believing it and
more about like institutional, entrenched,

entrenched institutional incentives.

Re resistance.

Yeah.

Yeah.

Versus people being like
upset by the notion.

It was more like top down.

It was like people had incent, like people
in leadership positions had incentives

to try to rile up the populace, but like,
if you actually just plopped someone

into that future and you were like, yep,
this is a whole bunch of things that you

don't know about, they'll be like, okay.

You know?

Yeah.

Yeah.

I think that's right.

Yeah.

I, I think that is a, a compelling sort
of theory of the history of resistance

to, to scientific discovery or technology.

Yeah.

Yeah.

Hmm.

Sorry, way off topic.

Interesting.

Yeah.

No, no.

We're, we're in a, we're in a
sciencey technology mood today.

You are.

We're in a mood.

That's cool.

Yeah.

Okay.

Number 54: to join ourselves, not
just to the air surrounding us through

breath, but to the reason that embraces
all things through thought. Reason

is just as omnipresent, just as
widely diffused in those who accept

it as air in those who breathe.

Hmm.

Reason with a capital R.

Yeah.

Mm-hmm.

The reason that embraces
all things through thought,

not all things think.

He doesn't think

all things think, yeah.

So it's, it's not about all things
thinking and employing reason, but the

reason nevertheless embraces all things.

Mm-hmm.

Is there something there? Like, what if we
replace the word reason with science?

Yeah.

Yeah.

That's interesting.

There's science, but reason
for us also has the notion of purpose.

Purpose, you know, that might
be a better reading actually.

Yeah, it's like, it's,

yeah.

It's some, some kind of, why, why?

Yeah.

Yeah.

Some kind of why that you have to embrace
in order to live a, live a full life.

Yeah.

Duty.

I, I think it, yeah.

It's like both though.

I, I do think he's using reason,
not just as, as this... I mean,

the translator could have used
purpose, and they used reason here.

And I think there is that, that sense
of, of logic or whatever that

goes along with that, that purpose
doesn't totally have.

Yeah, the through-thought piece, I think...
you know, there's been a lot of modern

pushback on this idea that sort of the,
the conscious mind is more important than

the subconscious mind or feelings or whatever.
And, and that, I guess, is just what he's pushing against.

Like that's him in the
context of his time.

Yeah.

Right.

So I'm sure it was more about juxtaposing
his belief against something that we

don't fully understand, like whatever
cultural context there was at the time

than it was about, like, thinking about
the implications of like only caring about

conscious thought versus the subconscious.

Yes, I agree.

Yeah.

I, I think, yeah, he's putting
thought up on a pedestal here.

Yeah.

Yeah.

Which, which yeah.

Makes, makes sense in context.

I agree.

He doesn't get to respond
to like Freud or whatever.

Yeah, exactly.

Cool.

I mean, yeah, this kind of a, this one
seems like kind of a wonky point to me.

Like I, yeah, I think
I see what he's saying.

He's, he's just being like, yeah,
the, you know, the big-picture why

is everywhere. There are
no gaps in the world for the, the

logic that pervades everything.

You can use my philosophy everywhere.

Mm-hmm.

Like, which is true.

Nice.

Yeah.

I feel like, I guess he like, yeah,
the analogy of air is cool because

for him there's no place without air.

Right.

And so it's like there's no place you
can possibly be. You can be in the trenches,

you know, of like a, a
battle site, and you still have

air, you still have reason.

Right.

You know that that's a, that's nice.

It's a nice analogy.

Yeah.

Yeah.

Yeah.

Although, yes, I mean, having
just talked about trying to explain

space, there's no air up there.

Yeah.

That, that part would be hard to explain.

Listen.

Yeah.

This thing that you call air,
it's not empty, actually.

Yeah.

Should we do one more?

Let's do it.

Okay.

Number 55.

The existence of evil does not
harm the world. An individual act of

evil does not harm the victim.

Only one person is harmed
by it, and he can stop being

harmed as soon as he decides to.

I like this.

This is a very, like, to
me, it's very classic.

It reminds me of like 20th century civil
rights leaders like Gandhi or Martin

Luther King or somebody like that.

The idea that it, it's like
it's non-violence, you know?

It's like, right, right.

Violence only harms the doer.

It doesn't harm the victim.

Mm-hmm.

Mm-hmm.

Mm-hmm.

Only one person is harmed
by it, and he can stop being

harmed as soon as he decides to.

I assume, the way you read it,

the one person is the one conducting the evil?

Yes, it's the perpetrator.

That's how I read it.

That's that makes sense actually.

That's a cool statement.

That's a very cool statement.

Yeah.

But yes, way ahead of its time.

I think way ahead of its time.

Oh.

Yeah.

Yeah.

I mean, it is tough, like it's a very
grandiose idea that an individual act

of evil does not harm the victim, but.

But I guess that's how I, I actually read it.
I read past that idea. Mm-hmm.

Like, I, I read that the victim was the
one person harmed by it, and that, I see,

you can choose to not be harmed.

No, but I like your reading a lot more.

I like your reading a lot more.

Mm-hmm.

Yeah.

It's very grandiose.

It's very like, you know, the thought you
have when you're being tortured, right?

Yeah.

Yeah.

That, that to me is the connection to
like the non-violence philosophies where

it's like, yeah, you can just say like.

I'm, I'm in control of my thoughts.

I'm doing the right thing.

Yeah.

This person is doing something to me.

I should feel sorry for them that
they've chosen to do this thing to me.

The, the, the line that caught me off
guard was the last one, and he can stop

being harmed as soon as he decides to.

So now he's giving advice to the torturer.

Yeah.

Like, Hey, don't worry, like you're being
harmed by this evil thing you're doing.

But you can stop at any time.

Yeah, that is a little bit weird.

I think some of us would, would think
that having done that has done some

longer-term damage to you, rather than just,
oh, if I stop doing evil to this

other person, then I, then I'm fine.

It's very big of him.

It's very sort of Gandhi of him to
say, listen, I'm not just gonna save

myself, the victim, from being harmed;
not just the victim is gonna be saved,

but I'm gonna save the torturer as well.

Yeah.

Here's the path.

Saving yourself.

Yeah.

I mean, that's awesome.

That's like the coolest, you know,
that's the most powerful thing

you can do if you are Gandhi.

I mean, that's the thing.

Yeah.

You go and you're like, listen,
you're harming yourself by doing this.

Yeah.

History's not gonna look well on this.

And you are, you can,
you can control that.

Like, yeah.

Let me help you with that.

Yeah.

Not even about me.

I don't even care.

Like, you can throw rocks at me.

You can throw eggs at me.

That doesn't matter.

Yeah.

But like you are harming yourself.

That's, that's awesome.

Yeah, I agree.

It's also quite...
it's biblical to me too.

Yeah.

I feel like this is, this is Jesus as
well as, as Gandhi, to some extent.

Oh yeah.

Sure.

I, although I guess the Romans were
the ones that were That's true.

Yeah.

But, Hmm.

I don't know.

I mean, technically Jesus had,
had been killed

already for a few hundred years.

Yeah.

Yeah.

Just, yeah.

100 years I think.

Yeah.

100.

Yeah.

So I.

Boy, I don't know if that's what
he's thinking about, but maybe,

yeah, I don't, I'm not saying that.

It's just another, another version
of this same sentiment I feel

like that I've heard before.

Yeah.

Yeah.

Cool.

Wow.

Very... that was a good one.

Another one that's like,
yeah, it's a good one.

But it's, it's another one that I
remember like learning about non-violence

movements and thinking about how
hard those philosophies are to live.

Like, you know, they're,
they're really cool.

Very admirable philosophies.

Yeah.

They're very hard to,
I feel like actually.

Follow.

Yeah.

Yeah.

So yes, Marcus is just kind of
collecting these very admirable,

but also quite tricky, mm-hmm,

to actually execute,
philosophies, I feel like.

Mm-hmm.

Mm-hmm.

Mm-hmm.

This is a very cool one.

Yeah.

Good one to end.

I dig it too.

Yeah.

All right.

Yeah, let's, let's bring that
with us into our next episode.

All right.

See you, Paul.

Next time.

Bye.