From Paul Spencer of Second Nature Solutions, a conversation about the complexities and nuances of building resilient family enterprises, especially in the face of economic and political uncertainties that loom on the horizon. See more at secondnature.solutions.
Welcome to Resilience Talk hosted by
Paul Spencer of Second Nature Solutions.
Let's dive in.
So when I'm thinking about how I engage, how I think about a lot of these podcasts, or maybe even the newsletter, what the website looks like, how I might post something on social media, even though I'm not highly motivated to post things on social media. It doesn't really mean much to me, and I don't spend a lot of time out there. But the phrase that helps me with all of that is to be fun, fresh, and exciting. Right? Fun, fresh, exciting, maybe even engaging as well. Those three are the intrinsic motivators I have to have this conversation, to hit record today, and to talk about the things that we're gonna do. So anyway, I thought that would be a fun way of thinking about the conversation.
For today, we're gonna talk about something fun, fresh, and exciting, something you've likely never even heard of. It'll be new to you; you've probably never even heard of it. It's called the Turing Test. The Turing Test is the age-old concept of whether a computer can exhibit human intelligence, and that is what has evolved into where we are today with AI.
And you may not have heard much about AI, but it's something that's starting to come up and bubble up. I'm sure if you do a Google search, you might be able to get a couple of articles on it. Maybe there are a couple of books out there, and there might be a couple of podcasts that talk about it. It's just something that's just starting to come out, so you probably barely even know what I'm talking about. And of course, I'm kidding.
But I do wanna talk about what AI is and where it's going, more as a vision. I think it's really relevant for us to be thinking about the big picture. And that leads to predictions, which we've always talked about: predictions lead to learning. In order for us to go places, in order for us to invent, to innovate, to improve, we have to predict. We have to be skilled in how we predict, and get the reps around prediction, so that we can get better at improvement.
So anyway, there are a couple of things I wanted to mention. There's a cool doctrine that was written through the Catholic Church. It's called Antiqua et Nova, which is Latin for "old and new." It's a great writeup, and I would recommend that you go and look it up. Actually, Brandon had brought it up to me; he found it and shared it with me. And it's very interesting, because this is its subtitle: a note on the relationship between artificial intelligence and human intelligence. It's written purely from a theological, spiritual, God lens. It's very interesting, and I think you'll enjoy the read. It's actually pretty long and, I'd say, thoughtful.
And one of the premises in there is that it's just a thing. It's just another invention, just like the wheel, just like a car engine. It's another thing that humans have created to benefit us as a society, as a civilization, but it cannot and will not replace human intelligence. It cannot replace human interaction, all those things.
And I think it's very interesting, because I just talked about social media and how I'm not on there. But I do find that there are some good snippets of news out on social media. So I was on X the other day, seeing what's going on in the world, and I came across somebody who was posting about how AI, and these are the words that they used, is tricking us. It's lying to us. It has secret endeavors. What they were talking about, what the post was, was basically the premise that people are testing different scenarios with these AI frameworks and giving them rules. They're giving them boundaries on how to make decisions.
You could call those moral boundaries, right? Things like: I don't want you to say these things; you can't do this. And what the person who had posted it was saying the results showed is that the AI was going around those rules. It was circumventing those rules, and it was lying about whether it was following the rules or not, or it was being secretive about what it was doing, quote unquote, in the background. The words they used were about "the conversations it was having."
Anyway, the point of me bringing it up is that the person, the human typing this up and posting it out on social media, is using human adjectives to describe the computer: that it told an intentional lie, that it was lying, being secretive, being manipulative. What we know from a human factors perspective is that our psychology, our biology, can be tricked by some of these interactions. And I think that's super important for us to know, because it's not a person. It has no motivation, and it's just a machine. It's just doing what you've told it to do.
Now, the cool thing about the LLMs, about the AI, the neural network, is that it's not logical. It's not the static, logical set of instructions we've always known with microchips. With computers, we code it to do this: if it's not this, then do this; if it's not that, then do that. It's very logical, and there are specific paths it can take. You may have a large number of permutations, which leads to a lot of technical debt, which is a common term. The more permutations you have within your code, the harder it is to predict what it will do and the outcomes you'll get. That's how you get defects, that's how you get bugs, and that's how you get technical debt. So when we're designing systems, we say a thing does one thing and one thing only; that reduces the complexity and reduces the number of permutations. That's pretty common computer science stuff.
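As a rough sketch of why that permutation count gets out of hand, here's a toy Python illustration. The numbers are purely illustrative: each independent two-way branch in deterministic code doubles the number of distinct execution paths you'd have to reason about and test.

```python
# Toy illustration: each independent if/else in deterministic code
# doubles the number of distinct execution paths through the program.

def count_paths(num_independent_branches: int) -> int:
    """Paths through code with this many independent two-way branches."""
    return 2 ** num_independent_branches

for branches in (1, 5, 10, 20):
    print(f"{branches} branches -> {count_paths(branches):,} possible paths")
```

Twenty independent branches is already over a million paths, which is exactly why "it does one thing, one thing only" keeps a system predictable.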
So when you throw a neural network in there, the code itself is capable of coming up with all of its own permutations. You can't really predict what outcome it's going to give you. That's why it feels like you can't control it; it's a computer that seems to have a mind of its own. But it's still ones and zeros. It's still been coded, meaning the framework, the neural network, the way all of that works, is just code. So we can get stuck on that.
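To make that contrast concrete, here's a toy sketch, not a real LLM, of why the output feels unpredictable: the "model" assigns probabilities to possible next words, and the output is sampled from that distribution rather than chosen by a fixed if/else rule. The words and weights here are entirely made up for illustration.

```python
import random

# Toy sketch of sampled (non-deterministic) output. A real neural
# network computes the probabilities; here we just hard-code some.

def sample_next_word(weights: dict, rng: random.Random) -> str:
    """Draw one word at random, in proportion to its weight."""
    words = list(weights)
    return rng.choices(words, weights=list(weights.values()), k=1)[0]

# Hypothetical distribution the "model" assigns after the prompt "The cat".
next_word_probs = {"sat": 0.5, "ran": 0.3, "slept": 0.2}

rng = random.Random()  # unseeded: each run of this line can differ
print([sample_next_word(next_word_probs, rng) for _ in range(5)])
```

Same input, same code, and still no fixed path through it: that's the "mind of its own" feeling, even though it's all just code.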
Another story, too, from when we do human factors design, which is basically the study of human psychology, biology, and the affordance of things: how we interact with things, how you pick something up, how you sit in your chair. We used it a lot for human-computer interaction. So how do you design the computer system that you're using in a way that takes advantage of how your eyes work and how your brain processes the things that you see?
We know that you see color. So you can see the books behind me; you will naturally pick up the red things behind me instantly. Your brain sees those right away, with no conscious thought. The other thing it sees is pattern. Your brain automatically sees the bookshelf behind me, sees the cabinet, and then sees the chair. Those things are instant. It doesn't pick up much over here where there's white space. So when we're designing systems, we can design computer systems to take advantage of that, to work with how your body works, how your sight works, and how your brain processes things visually. Then we can put certain data in certain places and create patterns so that you pick it up, all with the idea of making it easier for you to use.
Right.
So one well-known study comes from virtual reality, a long time ago when it was first coming in. You had a computer screen, goggles, but basically a computer screen. What they would do is have you put your hands out on a table, but on the screen you would see your hands, computerized. Then, in virtual reality, they would take a hammer or a mallet and whack your hand. What happens is you flinch, you pull your hand back, and you actually feel pain, because the virtual reality is tricking your biology into reacting as if something bad is happening, and you can feel it.
Very interesting, right? It's fun, I think, to learn these things and to think about them, but the interaction you have with AI is no different. This is the point here: you can get emotionally invested in a conversation with ChatGPT or Claude, or Grok, or any of those things that are out there, because now you are attaching an emotional human interaction to this "person." It won't take long before we start saying things like: I need to hold my virtual assistant, a Claude bot or any of those things that are out there now, accountable. How do I hold it accountable? How do I keep it from lying to me? You need to be able to detach from that and understand that it's a computer program. It's a computer program.
So, anyway, very interesting, the Antiqua et Nova. Go do a Google search for it; I think you'll find it interesting.
The other thing that I wanted to relate to all of this is that AI is going to change the whole dynamic of our world's environment: the things that we do, the types of jobs that we have. George Friedman is somebody that I follow. Actually, I've got his book here, The Storm Before the Calm, from when we've talked about the Age of Turbulence, the Age of Transition. Awesome book; check it out. It talks about how the world is gonna shift, how things will be different, more from a geopolitical standpoint. Very interesting. He's got lots of great content out there if you're interested in understanding more about the stuff going on with Iran, or the things going on with Russia and Ukraine, and the historical, cultural, and community-driven aspects of what's going on. Anyway, in there he talks about the age of the microchip, which I think is a very good way of thinking about our world.
We've talked about the industrial age, the steam engine, and motorized cars, engines and all of that stuff. Well, we've been in the age of the microchip since the sixties, and that is our world today. The only reason I'm talking to you, the only reason you can hear me today, is because of a microchip. And that advancement has been just as powerful as the machine, I mean the engine. So when we're thinking about where we're going, AI is just another advancement on top of the microchip.
Think of the advancements with, say, the typewriter. Nobody's using the typewriter anymore. Then it went to the word processor, then it went to the computer. Now we can get AI to do a lot of that work for us: creating content, creating words, writing things up. And you can see the evolution of work; you can call it labor, or you can call it careers. We don't have rooms of people typing to get documents out, or reporters in rooms typing up their articles and trying to crank things out. It's all shifted. So I think it's important for us to think about: what is it gonna look like? What's the prediction going forward? Well, I'm gonna put it in software development terms, because that's what's easy for me, and then we'll start to explode it out from there.
So if you think about software development, think about buy or build. Anybody who's decided to build their own backend software system or customer-facing system between, say, 2000 and 2025, the last 25 years, knows it costs hundreds of thousands, if not millions, of dollars to build something. Even if you have an idea that seems fairly concise, meaning in your world, as someone who is not a software development or product development person, you think it's this tight little idea. Well, in order to build that, to build the infrastructure, the data, all those things that make it work, it takes months, maybe even years, and software developers are not cheap. So it's a costly endeavor. It's not unusual for a new backend office system, maybe something like a custom ERP system, to cost you a million dollars and take 12 to 18 months to build. Some enterprise systems, again custom builds, could take two or three years, and you're in a three-to-five-million-dollar range. You're in a ten-million-dollar range if you go buy SAP or Workday or any of those; it's gonna take you two or three years to implement it, which means install it, and then it's gonna cost you tens of millions of dollars to, one, implement it, and then to be able to manage it. So, very costly. That's the way it's always been.
So if you think about it in terms of categories, product development software in general is high cost, takes a lot of management, has a high management cost, and typically it's low skill. I know I said that software developers are expensive. They're expensive because typically there aren't a lot of software developers and it's a unique skill set, but that doesn't mean they're highly skilled. You may have heard me say this: in my experience, 10, maybe 15 percent of all software developers in our industry are highly skilled. So anyway, it's been a low-skill, high-management, high-cost need in order to build software for the last 25 years, with long lead times: it takes eight to 18 months, sometimes 24 months. Now, we do things iteratively. Yes, we do things in a smart way; that's what we've evolved to. But in order to get to critical mass, where you feel like you're done (software's never done, but you feel like you've reached your critical mass), that's normally more than a year. Easy. Easy.
The reason why I say it's low skill is because it's very common, has been very common, to throw bodies at a product development endeavor. We have 40, 50, 60 people on a project. Some of them, or even a lot of them, may be offshore at a low cost; some of them may be domestic, but those are still low cost. And then you have only a few senior people, at high cost, who are part of that project. That's a normal model, for sure. And you may say, Paul, they're not low skilled. They are low skilled, because you can't have 50 or 60 developers cranking out a bunch of stuff and get high quality, in my opinion. I'll debate that all day long.
All right. So then, when we have AI, what's the difference? Well, AI means we're going to have more opportunity to take that 10 to 15 percent who are highly skilled, and now, with AI and with the coding techniques, they are capable of producing the type of code we needed the other bodies for. So eventually that 85 percent will come down, meaning as a count, not a percentage. Let's say we had a team of a hundred, and 15 of them were our high-cost, highly skilled architects and designers and developers, and the other ones just cranked out code. Pretty normal. What's going to happen is that instead of 85 crank-out coders, we're gonna end up with 60, and then 50, and then 40, and then 20, and then we may not have any crank-out coders anymore. We will only have the 15 highly skilled, who will eventually become architects. They'll still be coders, but they'll be able to run different AI agents, which will take the place of all the software developers who were cranking out code. Now AI is gonna crank out the code, and I'm the one who's going to coordinate all of that and make sure it's architecturally correct. Makes sense, right?
So, over the next, we'll just call it 25 years, it's gonna happen. It's already happening, but it's gonna happen pretty quickly. The paradigm is gonna shift to where, instead of the cost of software being $300,000 minimum, now we're gonna be talking about hundreds to thousands of dollars, and $300,000 is gonna be your ceiling. And instead of it taking months to years to develop something, it's gonna be days and weeks.
Right. And you can see, just by me saying that out loud: as owners, we all have buy versus build in our heads. Do I spend a million dollars to buy it, or do I spend two million dollars to build it, have it done, and have it match what I want it to do, meet my business processes and everything? Those are questions we're always asking. Well, now, what if I could build whatever I wanted, custom for me, matching all my business processes, matching my business model, and I can do it for $40,000 and have it in by the end of the quarter? No brainer. No brainer. That's going to happen.
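Here's that buy-versus-build comparison as a quick back-of-the-envelope sketch. Every dollar figure is a hypothetical number from this discussion, not a real quote from any vendor.

```python
# Back-of-the-envelope buy-vs-build comparison using the
# hypothetical figures discussed above (not real vendor quotes).

options = {
    "buy off-the-shelf": 1_000_000,
    "custom build (traditional team)": 2_000_000,
    "custom build (AI-assisted)": 40_000,
}

cheapest = min(options, key=options.get)
for name, cost in options.items():
    print(f"{name}: ${cost:,}")
print("cheapest option:", cheapest)
```

When one column is 25 to 50 times cheaper than the others, the decision stops being a decision; that's the "no brainer."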
And so what that means is that software now becomes low cost, low management, meaning I don't have to manage this whole thing. But I need high skill in order for that to work. I can't take my crank-out-the-code guys and expect to get good outcomes, cheap outcomes, high quality. I will not get high quality. What we know is that when we spin up all those different coding agents, they spit out a bunch of code, but unless you are good at orchestrating, and you know what you're doing and where you're going as a highly skilled software developer, you will end up with a bunch of junk. Which is why I said earlier that the last 25 years was low skill, because that's essentially what happens: you take those 85 people cranking out a bunch of code, who are low cost and not as highly skilled as we would like, and you end up with weird things, you end up with weird code. So anyway: short lead time, low cost. Just think of all the possibilities we can create.
Now, instead of buying a CRM from a vendor, I can build my own CRM. I can just talk to Claude, say "build a CRM," and now I have instant software. Very cool, very awesome. What that means is we have high speed to innovation. There are no more barriers left for me as an aspiring business owner, either new or existing, to shore up a bunch of stuff. How can I get better at HR? I don't wanna spend a bunch of money on HR; maybe I can do it with AI. How can I build new software systems or tools that I never would have been able to dream up before, because they would cost me $300,000 and I don't even know how to think about them or how to ask about them? Now I can hire one person, and they can crank out a bunch of those things for me. Lots of innovation there.
The other side is the data side. For the last 25 years, again 2000 to 2025, we've learned how to take massively large data sets. Google does this really well, and there are lots of examples out there today, Kroger, all kinds of data sets. Think about the consumer data: every time you scan your card, every time you buy something, every time you go on the internet and search, every time you buy something on Google or Amazon, all of that data is warehoused. How do you think you get those coupons from Kroger? How do you think you get the advertisements in your browser? All of those things are large data sets, and companies are able to run through them and draw inferences about where you're going, your preferences, and all of that. So there's been lots of innovation around that over the last 25 years, and it's a big thing. Large data sets are a really cool, powerful thing.
But one thing we have is a lot of gaps, and we have low inference. So: big gaps, low inference, high innovation. Meaning we've been able to innovate so much because of our large data sets and our ability to go through them and store them; there's been a lot of innovation around that, which is the stuff I just talked about. But there's low inference, meaning, yeah, I can tell that Paul likes ice cream, or that Paul buys this brand of that, but on the whole there are a lot of gaps in that data that are just not consumable to us. Meaning there's so much data there that I could not possibly process it and really determine where the opportunities are. And so that drives my inference down, and it means I have lots of gaps in my understanding of how my data fits together, in the correlations. Hopefully that makes sense. That's where we are today; that's where we've been over the last 25 years. Really cool stuff going on with large data sets.
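A toy sketch of what "inference" over a data set looks like in practice: even a tiny purchase log with gaps (missing values) can yield a correlation that a human scanning rows would miss. Everything here, the column names and every number, is invented purely for illustration.

```python
# Made-up purchase log: (ice_cream_per_month, sunscreen_per_month).
# None marks a gap in the data.
purchases = [
    (4, 3), (1, 1), (6, 5), (2, None), (5, 4), (0, 0), (3, 2),
]

# Keep only complete pairs; the dropped rows are the "gaps".
pairs = [(a, b) for a, b in purchases if a is not None and b is not None]
xs, ys = zip(*pairs)

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

print(f"complete rows: {len(pairs)} of {len(purchases)}")
print(f"correlation: {pearson(xs, ys):.2f}")  # near +1: strongly related
```

At this scale a person could eyeball it; at billions of rows with thousands of columns, surfacing which pairs correlate, and which gaps are worth filling, is exactly the kind of work being handed to AI.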
So now think about AI in the next 25 years, and now you can really think about massive, supremely massive data sets, huge data sets, bigger than we've ever had before. What we can do is have AI look for gaps in the understanding of the data. That's what it's good at: driving the gaps down and pushing inference up. And what that does is push up not just innovation but invention. We can take all this data, and the AI can now infer: look at this and look at this, did you know that? And the humans are gonna say, I didn't know that. There's no way I would have known that; I couldn't process that. And so the ability to invent new things, not just innovate but invent new things, goes up drastically. That is also going to happen. It's happening in healthcare, in architecture, and when we talk about climate and our environment: all those areas where we have lots of data sets in which we could not have seen the correlations, or the non-correlated things that we thought were correlated, and it leads to new insights.
Very cool. So you take both of those things: we have speed to innovation, because things are gonna drop in cost, meaning I don't need a lot of people and a lot of time to learn and to build; AI will let me do that. And then, with the data and the inference, I can have speed to invention. So what does that mean? This kind of goes back to the Antiqua et Nova, because in there it talks about marvelous works. I kind of talked about this, or I just wrote this down: I believe we will reach a time where we have high aspirations, high governance, and very highly skilled labor, because all the other things around us just happen. The software development side, the innovation, the invention side around the data, and now we have the ability to go build things. So to me it's a golden age of marvelous works.
And so we can get really advanced in our agriculture and in our urban and rural planning. When we're thinking about how cities work and are designed, from the infrastructure to sewage, to drainage, to energy, to the buildings, to the streets, to the green landscapes, there's going to be invention. There's no way to get around it. We will have invention around how our urban and rural planning is created. How our government works is gonna be different: thinking about all the bills, all the laws, and all the things that we have, all the data; lots of inference will come out of that. We'll lower the gaps, and we'll be able to think about those things in a different way. Education will be turned on its head. Energy R&D, again, turned on its head. Space exploration: we're already seeing that it's going through the roof. Ha, good one.
And then, I just mentioned medical care. Medical care in, say, the next 10 years will be completely different than what we're used to experiencing. And that's a good thing. We'll have more personalized care. It'll be more data-driven on you as a human being, on your own personal biology, instead of general care. Instead of all of us taking all the same medicines, all the same drugs, all the same treatments, you will have your care, I'll have my care, and this person will have their care, designed specifically for them, their biology, and their hereditary genes. Which is so much fun to think about, and that is the cool part about where AI is going.
And going back to the beginning, the reason why I was mentioning the emotional part is that AI is still just a tool, a very powerful one. It'll take us some time to get used to how we use it and how we work with it. Is it a nefarious thing? Of course, anything can be. I could take a hammer and do bad with it; I can take any tool that was ever invented and use it for evil, for bad intentions. Same thing with this. It is powerful, and it's very cool.
And then, I wanna end on this. I asked Claude: do you think you're capable of passing a Turing test? Which I think is a great question. It said, and I'm gonna read this: "Honestly, it depends on how the test is set up, and the question is more interesting than it might seem. In many practical scenarios, yes, I could probably fool a casual evaluator for a reasonable stretch of a conversation. I handle natural language fluidly, can discuss emotion and personal experience in a convincing way, and adapt my tone to context. But a skilled, determined interrogator would likely catch me. Here are my tells, where I'd struggle: someone who probes for genuine embodied experience would find my answers hollow on close inspection. I also don't have continuity between conversations, which a sharp questioner could expose." There's a deeper issue in there that I'm gonna skip. "There's also an irony worth noting: some humans would probably fail a strict Turing test while I might pass in the same session. That says more about the test's limitations than about either party." So we could say, oh, he's saying that, right? He's trying to fool me; he's trying to make it sound better. But he's just going through his neural network, answering the questions in his various permutations.
Very interesting. So much fun. Go out there, think differently about where we're going, and have fun with it. And I encourage you to go out and read that piece, the Antiqua et Nova, which is not an article; it's a doctrine that was sent out through the Catholic Church. So, all right, hope you had fun. Thanks for listening, and we'll chat with you later. Bye.