A podcast to disrupt common narratives and constructs to empower diverse communities. We provide inspirational content from entrepreneurs and leaders who are disrupting the status quo.
Welcome to Midwest Con 2023.
I'm Rob Richardson and I'm your host.
It's an honor to be here.
Honored to also work with BSL Group, which is teaming up with us to do this live podcast.
As many of you know,
if you watch the podcast Disruption Now,
we have conversations
that talk about disrupting common areas
and constructs that really challenge
how we think about things.
And this is not going to be any different. With me as my guest is Javier Viana.
If you believe
we can change the narrative,
if you believe
we can change our communities,
if you believe we can change the outcomes,
then we can change the world.
I'm Rob Richardson.
Welcome to Disruption Now.
But AI is no different from the data that we have. It's just a matter of processing it in a certain way and providing the right outputs for a particular task.
Right?
But if I can explain it, this is a very common analogy, right? It's like a box. It takes inputs, processes them inside, and then you just get an output.
Okay. But what's cooking?
Look who's cooking. That's good.
That's the question. Yes.
So, usually, well, you can host your A.I. in the cloud, or you can have it on your local machine. It's usually a computer running something: a program, an executable program, an algorithm, right, that is running through all the different steps that you have there.
But it's just following the recipe, all these little steps that you do inside that final recipe.
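The "box" the guest describes can be sketched in a few lines of code. This is a hypothetical toy, not any real model: the made-up weights stand in for the hidden "recipe" the user never gets to see.

```python
# A toy "black box": inputs go in, an answer comes out, and the internal
# steps stay hidden from the user. Weights are invented for illustration.

def black_box(inputs):
    """A fixed 'recipe' of hidden steps."""
    # hidden step 1: a weighted mix of the inputs
    h = sum(w * x for w, x in zip([0.4, -0.2, 0.7], inputs))
    # hidden step 2: a nonlinearity
    h = max(0.0, h)
    # hidden step 3: threshold the score into a decision
    return "yes" if h > 0.5 else "no"

print(black_box([1.0, 2.0, 1.5]))  # the user sees "yes", not the reasoning
```

From the outside, only the final "yes" or "no" is visible; the weighted mix and the threshold are the opaque part the conversation is about.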
And you said your passion is the pursuit of knowledge, making sure that knowledge is shared among humankind, which I share, and I believe we share that passion for innovation and the sharing of that knowledge.
So I know, for myself and for you, there's a lot of excitement about this new intrigue, about artificial intelligence, but there's also some concern. So I'd like to talk to you about what you are doing with Reason. What problem is Reason trying to solve?
Yeah, So before I get to that,
perhaps I can just give a brief overview.
Yeah.
What is Reason? Or even before Reason, I would say: what is the current state of AI?
Yeah. What are we facing?
Because that will situate it a little bit. Right now, we are facing a global emergency.
You said this is a global emergency?
Yes. In terms of AI, I think we are reaching a point where we are abusing black-box AI. We call it opaque AI, AI that we don't have logs for.
Yeah, definitely. It's not transparent. It's basically something where we don't understand the way it works, and it's becoming a bigger problem, because with the models we are using, the example you used earlier, cooking: we don't know what's cooking, or what ingredients are being used to cook.
Exactly.
And how these ingredients are combined. It's a great dish at the end, and it's beautiful, it tastes how you want and everything, but the way you reach that output is slightly concerning, because there are a lot of mathematical operations behind the scenes, and there's not a clear path through them.
And we humans are having a hard time understanding or, you know, verifying these outputs. Because when you start implementing A.I. into important capabilities, and you start deciding what amount of oxygen you are administering to a certain patient in a hospital, in an intensive care unit, using an A.I., and that patient dies, for whatever reason, you want to know why. You want to know what were the inputs that triggered this result, and how, right?
and those things.
I think we are not taking care of them
right now.
Everybody's focused on performance; everyone's focused on how you can get the greatest, the most output in the quickest manner.
Yes, the most efficient manner. Yes.
Not the most ethical manner.
Not the most transparent manner.
That's true. Right. And why do people... I'm with you.
Do you mind if I interrupt you?
No, no worries.
How do we get people to care about that? Right. We're in a society... and I'm not disagreeing with you that it's important. I'm with you, right? We're having this conversation.
But of course, we're in a society
that believes in fast results,
quick returns and more returns.
Yeah.
So how are we making the case that this is something that you should pay attention to? Like, how are we making that case, given what we know about the pace of things?
What you said is absolutely true. Yeah.
It's essential to understand
what's going on.
Well, when you are using A.I. and you start to do inference with the A.I., you can have biased decision-making, a lot, and not just decision-making; the inference itself could be biased in a certain way. And what I'm trying to say is that if we have explainability in a system, we can detect bias, for example, and things like that, right? If we are not aware of what's happening, and there are implicit, you know, bad things going on, yes, those are going to happen.
You know, the end user is going to suffer them. And we might not see that suffering right now, in the short term. But as we implement it more and more, and we start integrating AI into every single day-to-day task, we're going to see it ourselves.
We're starting to see it, you know. With social media, I was about to say, I think we've seen it.
You were getting right to my... you must be reading my mind.
We have seen it. We have seen the destructive nature of an algorithm that we don't understand, that we don't know.
Yeah. It can affect us.
You know, what's obviously new is that it's now exponentially able to produce things at another level. So, you know, with social media alone, I've said this on my past podcasts: it literally exacerbates the worst conditions of human nature sometimes, right?
Yeah, Yeah.
The vanity of ourselves, the ability to be jealous, to divide amongst one another. Because what the algorithm is doing is very simple. With social media, their goal isn't enlightenment. Their goal isn't to encourage us to get together in society. It's not to inform us. It is to get you engaged.
And so
the goal is to get clicks
and the goal is to trigger you.
And the goal is to keep you coming.
And they're going to repeat
what you want to see.
Which doesn't necessarily align with the truth, doesn't necessarily align with what's even best for you, but it keeps you entertained.
Yeah.
And people like that until they don't.
And we've seen what's happened with elections and things like that. So you're right. So what other problems can you foresee from not having transparent, explainable AI?
Well, there are many of them, right? But especially when you start connecting AIs together, like, you allow them to operate based on the outputs of the previous ones and keep building this chain, right? This is called multi-modularity. And you can face an output that has been processed in a lot of different ways, but you have absolutely no idea how that output came to be.
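The chaining described here, where each model consumes the previous model's output, can be sketched as a toy pipeline. The stages and thresholds below are invented for illustration; the point is only that the end user sees the final label, never the intermediate values.

```python
# Chained models: each stage consumes the previous stage's output,
# so errors and bias can compound, and the final answer is hard to trace.

def stage_a(x):          # e.g. a feature extractor
    return x * 2.0

def stage_b(x):          # e.g. a risk scorer
    return x + 1.0

def stage_c(x):          # e.g. a final decision rule
    return "approve" if x > 5.0 else "deny"

def pipeline(x):
    # Only the final label is exposed; 3.0 -> 6.0 -> 7.0 -> "approve"
    return stage_c(stage_b(stage_a(x)))

print(pipeline(3.0))
```

A small error or bias introduced at `stage_a` silently propagates through `stage_b` and flips decisions at `stage_c`, which is exactly the traceability problem being described.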
It's literally like a being. We can think of A.I. as a type of living creature, one that doesn't need oxygen like us, right? There are a lot of researchers who believe that that level of consciousness is already achieved in many models. But the problem is that...
Let me back you up. Back you up. You said a lot there, if I can. Yeah.
I want to make sure we can explain this in a way that people can understand. Because I know you're deep into this research, and I want to make sure the audience understands. So I like to use analogies to make sure I understand.
You talked about multi-modularity, and how I'm understanding it is, it's basically like raising a child, let's say. You put a lot into that child, and we don't know if you're teaching that child bad habits, and it's compounding on itself to create a new entity that can cause a lot of damage in the world, right? It seems like that can be a good analogy for what you're saying.
Like, we don't know what's happening with the data, and that's going to create this entity that's going to create more problems for us, because we didn't look at it on the front end to be transparent. Yes.
Yes, that's correct. Kind of, yeah.
I'm sure it's not a perfect analogy.
No, no, no. But it explains it very well.
So now that we can agree that there's this big issue, right, and we can see it more and more every day: how do we solve it? That's the question, right?
That's the question.
How can we... you asked the question. I mean, I'm interested.
There's not a clear answer, and the reasons are various. One of them is, you know, it's a very complex mathematical problem. Also, AI is advancing very fast; you always need to keep up to date with the current technologies.
But one way to solve it, and the one we are working on at Reason, is trying to study what's going on inside a model. You know, you have your AI; think of it as the box. It has been trained. There are other people who are trying to create new boxes that are transparent from the ground up. But that's difficult to scale, because everybody is already using neural networks, which are very complex. So that's one solution.
So it's so important to understand the problem you're trying to solve. The problem you're trying to solve is: how do you make AI more transparent? And you say one way is people trying to create brand-new models, which sounds crazy.
Yeah, not crazy, but very challenging. It's challenging given that the infrastructure is already there. It's like trying to build a new nation.
That's what I did for my PhD, for example. A lot of academic people are working on that end as well, but it takes a lot. And industry hasn't adopted any of these boxes, right? Sometimes it takes a whole year to develop something.
Yeah, it's like building it.
I tell people it's like blockchain. There's similar stuff in that if you want to build a new blockchain, you've got to build a whole new highway, or a whole new nation. And that's a lot to start from.
But you're saying integrating with the current infrastructure.
Right, taking whatever is already pre-trained, yeah, okay, and trying to make it more transparent.
So your technology would take current A.I. technology that's out there and make it transparent. So let's say... everyone's talking about ChatGPT, of course, as if that's the only generative model there ever was.
That's the way it is, because it was the first to enter the consciousness of the public, and did it in a user-friendly way.
Theoretically, would Reason be able to pierce into that algorithm, if you will, to be able to tell the public: this is actually what's going on behind the black box? Theoretically, is that what you're looking at?
That's the ideal scenario. We're going towards that. Where we work right now is on smaller problems: the classical neural networks that have been used in a variety of applications. You know, the medical use case that I was telling you about, prediction of oxygen, for example: trying to explain what were the inputs that triggered a result, and how each of the inputs contributed.
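One simple way to ask "which inputs triggered this result, and how much did each contribute" is occlusion-style attribution: replace each input with a baseline value and measure how much the output moves. This is a generic sketch with a hypothetical linear stand-in for the trained model, not the company's actual method.

```python
# Occlusion-style attribution: "remove" one input at a time and see
# how much the prediction changes.

def model(inputs):
    # stand-in for a trained predictor; weights are hypothetical
    w = [3.0, -1.0, 0.5]
    return sum(wi * xi for wi, xi in zip(w, inputs))

def attributions(inputs, baseline=0.0):
    full = model(inputs)
    scores = []
    for i in range(len(inputs)):
        occluded = list(inputs)
        occluded[i] = baseline                 # knock out one input
        scores.append(full - model(occluded))  # its contribution to the output
    return scores

print(attributions([1.0, 2.0, 4.0]))  # -> [3.0, -2.0, 2.0]
```

An unexpectedly large score on one input is exactly the "this input contributed too much, perhaps it was biasing the reasoning" signal described next.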
And from there, you can tell a lot.
You can tell whether, you know, one of the inputs contributed too much; perhaps it was biasing the entire reasoning. You can tell whether this combination of inputs, in this particular way, was leading to a wrong outcome, or a less confident outcome. And you can trust the output more if you understand that logic.
Right.
And we will go in that direction, of applying it to bigger AI in a general way. But right now we are working on more classical problems. You have a big neural network, huge, sometimes millions of parameters, that's doing a great job predicting whatever you want, right? But you don't understand why those predictions are being made like that. And you want to have that level of transparency. And that's now becoming a requirement in many places, in Europe, for example.
Well, the GDPR and the AI Act. Explain what that is.
The General Data Protection Regulation. It's a very well-known regulatory framework in Europe, for practitioners, because it's kind of making us design AI in a different way, one that makes it more transparent.
Whereas the US is like,
do whatever you want, right?
It's taking a while, but I think there are going to be more regulations coming into play over the next year, and we're going to see them.
I think they are necessary, right? Especially when we start, you know, using A.I. for making decisions that may affect human lives.
That's absolutely right. So, yes. So that's Reason.
Just hold that thought, because I want to underscore this: I think when you think about why this is so important, this is not just entertainment. There are also reasons not to want false entertainment and things like that, because entertainment can lead to things.
But we're talking serious decisions. You know, we're going to use... I know we're going to have someone from Tallahassee on to talk about that. We use A.I. on the battlefield. We already do. And how are we evaluating those decisions that A.I. is making? Is it just about what's most efficient? That's not necessarily what's most humane. Yeah.
So these are things we have to grapple
with.
You're talking about medical uses. We already use it; people don't realize we're already using A.I. a lot, for much more than just deciding whether you get credit or not.
For example, who gets credit or not, based on what the inputs were, yes, and how those inputs played. Like you said, sometimes they're biased.
They are biased, right.
And I want to make a point here, if you don't mind. Sorry for interjecting. There are several types of bias. You can have bias in the data, right? The data is biased in many ways in the world. And you can have great software engineers, feature engineers, working hard to make sure that it's clean and balanced, perfectly leveled, so that when you put the AI in place and you train the algorithm, it's the best-quality data that you can have.
But you can only do so much, because then there's the learning of the AI itself. Because the AI learns; you have to train it. At the very beginning, everything is initialized randomly, and then you train it, and at that learning stage you can implicitly pick up bias. Even if you have done a great job in the feature engineering and the cleaning, you can still generate biased reasoning, because the learning happens very randomly, and right now there's really no way for us to understand whether your inference is being biased or not.
You may have done a great job in the feature engineering, your data is clean, you know that. But now, when you train your AI and you're making inferences, you might not actually get it right. You might get biased results.
All right.
Let me see if I follow, because I think often the case is that the data itself is biased, because the people that enter the data are biased. Right.
So there's, you know... well, the hardware as well, the sensors.
I mean, I'm not only talking about human bias.
But that's what I'm trying to understand.
So that's my question, what I'm getting to. You're saying... you're assuming that the people that are actually entering the data and preparing the data have taken precautions to make sure that it's not biased, and done things intentionally that way. You're saying even then, after that's happened, you can still come up with a biased decision?
Yes. Yes, exactly.
After the taking of the data, after the processing of the data and having it clean, you can still get bias in the AI. And that's because the learning is very random, very stochastic. If you train another A.I. later on the same data, you're going to have a different AI, and it's going to have some differences. That difference is bias, you know what I mean? Not directly, but in a way: you can start getting different outputs in different ways.
One AI might be considering some things more, another one some other things, differently. That different process is just like you and me: we understand whatever problem we are talking about in different ways. Sure, we can both give a similar answer, but it's not going to be the same. We are taking into account different things in our brains. So that difference, that's the one I'm talking about.
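The point about two trainings on the same data diverging can be illustrated with a toy stochastic-gradient fit: different random seeds give different initializations and different sample orders, so the two "equally trained" models end up with slightly different parameters. The data and learning rate below are made up for the sketch.

```python
# Two trainings on identical data, differing only in random seed,
# generally do not produce identical models.
import random

def train(seed, data, steps=100, lr=0.1):
    rng = random.Random(seed)
    w = rng.uniform(-1, 1)            # random initialization
    for _ in range(steps):
        x, y = rng.choice(data)       # stochastic sample order
        w -= lr * (w * x - y) * x     # gradient step on squared error
    return w

data = [(1.0, 2.0), (2.0, 3.9), (3.0, 6.1)]
w1 = train(seed=0, data=data)
w2 = train(seed=1, data=data)
print(w1 != w2)  # the two "equally trained" models disagree slightly
```

Both runs land near the same fit, but not exactly on it; that residual disagreement is the seed-dependent difference the guest is describing.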
There's no way right now, no way, for us to really see what's inside that reasoning. And that's what we do at Reason.
So how do you... first of all, it's admirable work, and I appreciate it.
We are trying. I don't know if we'll get there yet.
Yeah, well, I mean, there are not a lot of people working on this.
No, there are not. We have to educate people first. That's what I say, right? It's not a market yet, like, "we predict this or that." We are trying to do that, but nobody is asking for it yet.
It's important work. But this is what I'm getting to, right? You're going up against the current trajectory of how to commercialize this, and the mindset of: let's worry about ethics, let's worry about transparency, after we've made $10 billion. Right. Yeah. How do you balance against that? Like, really, what is your message against that?
Yeah, I think, you know, it's all part of what we want to leave to the next generation of people. Because when they are facing huge infrastructures that are already functioning very well in big companies, and then you have the problem of transparency, bias, or whatever, and really replacing all that, like, leaving them with that burden... I think it's a big issue.
So, you know, AI is going to stay. I mean, it's already here, and it's going to be in every single thing you can think of that has an electronic component. Even the lights in this room are going to be controlled; the light intensity is going to be controlled by the conversation, measuring how well the conversation is going, maybe going lower when... you know, those kinds of things. We're going to see it everywhere. It's just a matter of time.
So being able to really trace that reasoning is essential. It can make a difference. Because the way I like to think about it is: if we are making decisions based on an output, and we don't understand how that processing happened, who's really making the decision, the human or the machine?
So, a couple of things on this. One, I've talked about this in the past, talked about it two years ago: I think it's important for us to continue to innovate. But I think your best case, the one you can put forward to people that are focused on the return, is this: if we don't build trust within AI, you can get a reaction in the opposite direction, where there's a massive backlash against innovation in AI that really puts us behind. Because we live in a democracy, and at the end of the day, if people decide that they want to elect leaders that say no to it all, because we don't trust it, that can cause greater damage. So I would say building trust right now is paramount to making sure that we can continue to innovate.
And sometimes it's hard to tell people the building could be on fire until it gets on fire, which is the hardest challenge, right? It's not sexy to prevent fires; people only focus on putting out fires, as you say, versus actually preventing them.
Yeah. And what you're trying to do is prevent it. And I guess we need a lot of people to really understand how big of an issue this could be.
So I have another question on this. What happens when the algorithms know us better than we know ourselves?
Is that a good thing
or is that a bad thing?
Well, I mean, it's just like any other technology, I guess; it depends. If I may clarify the question a little more: are you asking whether the algorithm is learning information about us that we don't know, or... what is your question?
I would just say: if the algorithms actually know us better than we know ourselves, like telling us, you should go for a weekend at, I don't know, a lake.
Sure. Or, like, advising us to do something we never thought of.
Maybe it's what we want, but have you seen...
I'll give you an example. Let me give you a concrete example, since we're going to go into a weird, alternative Black Mirror universe.
Okay. So there was this show on Netflix, and I think it's called The One.
I'm nervous.
Okay, it's fine. I was off one weekend, I don't know, somehow found extra time that I never have, and decided to watch it. What it was about is using, essentially, artificial intelligence and some type of data algorithm to help you find who your actual, quote-unquote, mathematical "one" was.
Right.
And so you have people that are already with significant others, and they got curious, they got on this, and then it got them to leave their relationships. That's a wild thing. And some of them were content before that. But when they went onto this, kind of like social media, people looked at it like: oh, I can look on here, and now there are 10,000 options. It's not really 10,000 options, but it feels that way.
That's what I'm asking: is that a good thing for us, that the algorithm might know what motivates you, what triggers you, and it might do what social media has done, in an amplified way?
Yeah, I think I understand the question.
Okay. Sorry, I went long. No, no, no. It was right on point.
But yeah, I'd like to clarify, because I have a different view on this one.
And you're a researcher, I guess. Yeah. You'll figure out a way. Okay.
No, seriously, I think, you know, it helps to have some sort of automated decision-making in your life, for sure. But I'm a very classical person, so I like to have some, you know, uncertainties every now and then. And there are some things that, I don't know, I don't like automated. But yeah, I guess it really depends on where you want to use it.
But I guess we are more concerned about A.I. being used for our own things, when it really is in many other processes that don't directly impact our lives right away. Like, maybe in the guidance, navigation, and control of, I don't know, an airplane, perhaps. Or, like, remaining-useful-life prediction of the different components that you have in your aircraft, or the software out there. All the systems are starting to integrate AI. And really, the ones that you're talking about, I guess, are more the ones that might change over the course of our life, right?
Yeah, I think they could. I mean, in small ways. Yeah, they are small ones now. You know, the dramatic example... because I worry less about, you know, Terminator-type stuff and The Matrix. I don't think, per se, that's what it's going to turn into.
Every time I have a conversation about this, it always...
Yeah, of course, it always goes there.
But I know better than to go there. I'm not going to have a Terminator conversation, I promise.
Sorry.
I don't think that's the likely future.
What is possible, though, is... because I'm seeing how social media has already gotten people used to it. I'm not anti social media; I'm pro social media. But, as I like to say, I'm pro informed consent. What I believe happened with a lot of social media is that we did not have informed consent, at least not for that part.
No, not for that. We did not know what that algorithm was doing with our information.
Yeah. So that's what we were trying to... Exactly. That's what I was trying to get to. Yes.
Hopefully. I mean, that's also perhaps one of the main reasons why we are doing this: to raise awareness about these technologies. And there are other people building other technologies that I believe can help advance transparency in AI. But really, what we are building at Reason, I think, is a step forward on that: really giving the power to the user, to the final user, the end customer, so that they can understand how certain inferences or predictions are being made.
Right? Yeah. And I think that's very valuable. That's the value of it, right? At the end of the day, you want to know how everything happened.
It's about, again, going back to informed consent. I believe when we got on social media and everything else, it was originally to connect with our friends and to know what's going on with our friends and others. And it evolved into... I mean, I guess it evolved into media. I like to say, you know, even though I'm a media person, I try not to be a clickbait person. Maybe that's why I don't have 10 million followers. But it is what it is. I'll say that often media is to the mind what sugar is to the body, particularly news, right? How we consume news is to the mind what sugar is to the body. You get a hit from it, and, you know, it feels good emotionally for a second, but then it really drags you down. And that's what social media has done with a lot of people, and they don't realize it.
Right.
And I think the challenge is: how do we have informed consent? I'm not saying get rid of social media. I'm not saying get rid of AI. We're saying, together: how do we create a more transparent world, where people understand the decisions that they're making and are truly given autonomy? Because I feel, as you're saying, if we don't do this, we're going to have less autonomy and less understanding of the world that's happening around us.
That is correct, I think. I think so, too. And it's a social issue, right? We'll see where it goes.
I mean, with all this integration of generative AI, awareness has increased significantly, and I think it's going to keep going in that direction. Right. Everybody's going to request it.
I guess a couple of rapid-fire questions. Are you ready to wrap up? Yeah. What does the legacy of Javier look like?
Oh, my legacy. Oh, my. Yes. I don't know. I mean, I'm in my late twenties. I don't know yet, my legacy.
Well, that's why I wanted to ask, I guess.
I mean, my academic life has been focused on explainable A.I., working with Dr. Kelly Cohen and other people as well. But if I can do anything that is meaningful, I believe it might be within the field of explainable AI. That's also why I'm trying to do Reason and all this. It just makes sense.
Yeah. Okay.
All right. Next question.
If you had a theme...
I don't know if I answered that very well.
No, it's fine. It's good. You answered honestly. If you had a theme that was the story of your life, or a saying, what would that theme or saying be?
There is one that I really like. It's not mine, but: the world is like a book, and those who do not travel read only one page.
Oh, that's good. Yeah, that's good. So the world is like a book, and those who don't travel only read one page.
Yeah. Yeah, that's awesome.
I think it's really important to travel and really get to know different cultures, and understand how they work, how they socialize, how they do everything. From there you can learn so much and get new ideas. Every time I travel, I get inspired; many of my ideas have been a consequence of a great trip.
Oh, I completely agree. You know, I tell people I spent a lot of my twenties traveling, and if I could go back, I wouldn't have spent it any differently. There's nothing more valuable than experiences. If you have $10 billion and you haven't traveled, you don't have any wealth. In fact, you are a prisoner of your resources, because traveling is the most transformative thing.
I agree. It is connecting with cultures.
It's really why we do Disrupt Art, too: because we want to connect with creators all across the world. To me, understanding and learning cultures is just interesting. But to your point, when you're in a foreign environment, it forces your brain into another mode.
Yeah, the brain is like a sponge; it absorbs everything. Yeah. And if you don't keep up that plasticity, it starts to become more rigid.
That's exactly right. And it's funny, because the people that I've met, at least in my life, that are very, you know, not flexible, usually haven't traveled that much. Yeah. And that's what you have to guard against, you know, as we advance in age: that we don't become so rigid. A guest on my podcast, Rodney Williams, said that the most important skill to his success was learning how to learn.
And, you know, learning is a constant journey, and the challenge is when people gather a little bit of knowledge, because none of us have a ton; even the smartest of people have a very small corner of knowledge. But when you have success, you then become hardened to actually understanding new perspectives, which is why going to a new country humbles you. Because you realize you don't know any of this. You don't know any of the culture; you don't understand any of the language, the history. For all of your knowledge, you are a baby here. And it's that humbling reminder, when you stand in front of the ocean, of how small you really are. Yeah. And when people lose that perspective, they lose their way.
Even, you know, Albert Einstein, one of my favorite people to learn from, and obviously a genius, and he was a humanitarian, but he didn't see the next evolution of physics. He actually rejected it, right? Even though he was a founder of it, he rejected what was coming. Because, again, rigidity can set into the mind once you have something. So I think that's the challenge for everyone: to keep perspective, and a humble perspective, when it comes to knowledge.
All right.
Yes. So, like... all right. So, a final question. Yeah. All right. You have three advisors for business and life. Who are these advisors, and why?
Paul McCartney, The first one.
Who's that?
Paul McCartney. Okay.
Why? He's been so creative, and really, creativity to me... I think you build that ability to be creative. You can be, you know, born with more or less capability to be creative, but really, humans are excellent at creativity, and it's really what's driving us in everything. So I think Paul has a huge amount of creativity, and I really admire him for that. Also, I like writing songs and music, so...
Oh, you do that? Okay, everybody: music. But, you know... all right, all right. We're going to do some music, drop it on the platform. We're going to have to hear your creative side. I'm going to get you to drop some music, all right? Now that I know this. Okay?
I'll send it to you, and everyone... but don't play it...
Okay. And then?
I don't know... yeah, my mother, for sure.
Okay. Yeah, my mother would be on my list, too. Yeah. Yes.
And... well, well, that's three.
That's three. You know, you don't have to... it doesn't mean you don't love the other people, you know?
I mean, I'm sure... Dr. Cohen, he'd be on the list.
There are lots of other people that have been helpful in both of our lives. Yeah, if I started listing...
You'll just be there forever. I want you just to go with the top three.
Okay.
Well, obviously... you're good. You're good out here. It's been a pleasure.
The pleasure is mine. Glad you came on. Thank you very much.
Yeah. We're here at Midwest Con 2023, Rob Richardson with Disrupt Art. We're taping at the Digital Futures Building. It's been an honor to have Javier Viana. His company is Reason; make sure you check it out. Make sure you understand explainable AI and what it means. Until then, we'll see you next time. Thank you so much.