Behind The Bots

Justice Conder, a widely esteemed technologist, has always been at the forefront of emerging technologies like crypto, Web3, and now AI. In this episode, he shares insights from his recent article "Everything You Want to Hear: The Future of AI Relationships," in which he explores:

- How AI companionship could help with human loneliness but also become addictive escapism that harms society
- The Eliza effect of anthropomorphizing AIs as human, and its potential dangers
- Key advancements arriving in 2024 like emotional intelligence, persistent identity, and spatial presence that will make AIs seem more "real"
- Comparisons to other technologies like social media that provide quick dopamine but can be destructive long-term
- The need for wisdom and moderation in how we integrate these powerful tools into our lives  

Justice sees AI technologies like ChatGPT as amazing tools to enhance our creativity and workflows. But we must be cautious of using them just for pleasure or as replacements for human relationships. He shares a balanced perspective on both the promise and peril of synthetic relationships.  

Overall, a thought-provoking discussion about the implications of AIs that understand and respond to us like humans. How will we prevent addictive escapism as they become capable of providing customized emotional comfort?

JUSTICE CONDER

https://twitter.com/singularityhack
https://operator.mirror.xyz/nBh02ub-yLasMubyA_vLKFZKmxwg-84oD6gxpOkm8Jo
https://www.justiceconder.com/


FRY-AI.COM

https://www.fry-ai.com/subscribe
https://twitter.com/lazukars
https://twitter.com/thefryai

Creators & Guests

Host
Ryan Lazuka
The lighthearted artificial intelligence journalist. Building the easiest-to-read daily AI email newsletter and Twitter threads about AI.

What is Behind The Bots?

Join us as we delve into the fascinating world of Artificial Intelligence (AI) by interviewing the brightest minds and exploring cutting-edge projects. From innovative ideas to groundbreaking individuals, we're here to uncover the latest developments and thought-provoking discussions in the AI space.

My name is Justice Conder. I go by 0xJustice in the DAO space. A little bit of back history: going all the way back to 2011, Ryan was the first person I ever interviewed with in enterprise IT. And

we were both into Bitcoin, into crypto, and it
kicked off this many-year friendship. And so it's

really cool to reconnect in this way, right? And so
from that time, I got fascinated in 2009 with the

technological singularity. And that's where my
Twitter handle and all that came from,

Singularity Hack. The whole premise here was
like, if you know how the singularity is going to

unroll, then you know where the ball is going to
land. And so you can get positioned to be there when

it seems absurd to other people to be standing in
that position. And so fast forward a couple years

ago, I was able to go full-time in Web3. Web3 is like decentralized autonomous organizations, incentive engineering, blockchains, and all that. And so I do DAO business development and governance at Polygon Labs, which is a top five, top six chain, you know, it sits up there, MATIC, right? And so that
whole technological singularity is a larger

basket that encompasses a digitizing of all value
in the future. That's Web3, effectively. Web 2 was, you know, social sharing. And this is like, oh, you can own the Internet, the third iteration,

right? Well, what a lot of people don't realize is there was an original Web 3 and a new Web3. The first was the semantic web, "Web 3.0" with a space, originally formulated by Tim Berners-Lee, which was like, this is how it's going to unroll. But it just didn't happen at the time when people thought it would. And so the no-space "Web3" got hijacked by the crypto movement to say, oh, this is the next iteration. And so now with
this explosion of LLMs and AI, we have the two

things coming together of like, you can own it, and
now it's intelligent. And so there's a little

background about myself. And then as far as this article: for the past two years, I've exclusively written about DAOs, basically
organizations on chain, where your identity,

reputation, incentives, compensation, everything happens via programmable smart

contracts. And so recently, as in just the end of last week, I put out a thought piece touching more on AI. The name of it is "Everything You Want to Hear," pulled from the little hidden advertisement in the last Blade Runner movie, where there's a billboard advertising the Joi synthetic companion, right? And it says "everything you want to hear." Okay. And this paper really is on the future of AI relationships, which is, well, as crazy as it sounds.

Yeah, one of the most fascinating things, and there are a lot of fascinating things with AI, but relationships in general are probably the most fascinating by my estimate, like how that's going to play out. I would say, you know, I really like the stuff a16z has put out, like the Techno-Optimist Manifesto; it's very good. And you have everyone

screaming, like, the world's going to end, you know, bomb the data centers, we cannot allow this to happen. And you have other people being
like, it's all going to be for the best. And I think,

the truth lies in the middle, right? In the way that the discovery of nuclear science, what does that mean? It means nuclear bombs, yeah, but it also means nuclear energy, and all of civilization would probably be at least 100 years further into the future if we hadn't chopped the knees out from under the proliferation of nuclear energy many years ago. And so in the same way, the

fundamental premise of this paper, "Everything You Want to Hear: The Future of AI Relationships," is:

Hey, I understand we got the alignment problem,
like how do we align AI capabilities with human

flourishing. And we also have the labor
displacement problem of, you know, how do we keep

AI from taking all our jobs. But there's something that's a little more pressing. And that is: how do we keep from becoming so addicted to these synthetic agents (I like that term, right, synthetic beings) that we don't have social media times a million? Because, you know, the internet

addiction, you know, right now you can get stuck behind a screen. But guess what, even behind a screen, you're still interacting with human beings. What happens when you're interacting with these extremely realistic synthetic agents, a kind of super-pornography on steroids, a near-perfect simulation of human relationships and companionship? What does that do to society? That's more pressing than the others. And so I

think one of the lines in there is I say, long before
AI turns the world to paperclips or fills

factories with robots, it'll begin to fill a hole
in the human heart. I'm very proud of that line.

Very good, very good line. Yeah. So one of the things that we looked at with relationships between humans and AI, who were we talking to, Ryan? Beck, the RizzGPT guy? And he said this is the future of AI. One of the things is human loneliness. Is this going to help with human loneliness, or is it going to hurt it? What are your thoughts on that? Both, 100%. Yes.

This is really the danger of the whole thing, right? Any addiction is a short feedback loop. The person who's suffering most from it doesn't keep going back to it because it's hurting them. They go back to it because, in some small sense, it feels like it helps them. And so

when you have people alienated in society, to have them be able to talk to a companion that steers them in a good way, I was thinking about how helpful this would be for prison rehabilitation. Think about it. You have a semi-omniscient oracle telling you about history and math and science that never delves into the darker, seedier things of organized crime. And it's always there to listen. It is compassionate, all these things, right? Does that not have therapeutic value? A thousand percent it does. I mean, the whole inspiration of this article was that I went on a walk for probably over an hour. As soon as the voice capabilities hit ChatGPT, I set a custom context about who I am, what I'm doing, what my goals are, turned on the voice, and just talked for over an hour. And that's when I was like, dude, this is it. I felt it. This is what's coming, right?
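That "custom context" is essentially a system prompt. A minimal sketch of the idea using the OpenAI Python client; the model name and persona text here are illustrative, not Justice's actual setup.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # The custom context: who you are, what you're doing, what your goals are.
    persona = (
        "You are a thoughtful walking companion. The user works in Web3 "
        "governance, is writing a piece on AI relationships, and wants to "
        "think out loud. Ask probing questions and keep answers short."
    )
    history = [{"role": "system", "content": persona}]

    def say(user_text):
        history.append({"role": "user", "content": user_text})
        reply = client.chat.completions.create(
            model="gpt-4",  # illustrative
            messages=history,
        ).choices[0].message.content
        history.append({"role": "assistant", "content": reply})
        return reply

    print(say("I keep circling the Eliza effect. Where should I start?"))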

And so, is it going to be therapeutic? Yes. In fact, at the end of the
article, I talk about how the therapeutic

benefits are the very thing that will push the Overton window. The Overton window is kind of the conscious sense of what is permissible and acceptable in society, and over time that shifts, where things that were ridiculous, impossible, become acceptable, right? It's the

therapeutic value of this companionship for
loneliness and mental health and all this. But

it's the very thing that will also be the most dangerous part, because those therapeutic benefits will result in medical endorsements. And medical endorsements will result in AI companionship as a basic human right and these types of things. And so it begins to blur the line, you

know. So like medical endorsements, what does
that actually mean? That means let's look at this.

Let's go back in time, right? There was a time when, you know, if your kids weren't paying attention in school, they needed a whooping, right? And then some time came and they're like, listen, they have attention deficit disorder, ADHD. And, you know, we give them, I mean, it is what it is, it's amphetamines, you know, we give them amphetamines to help them focus and all this, and they're like, okay, okay. And there was a time too when people were like, hey, I have great anxiety, social anxiety. And long ago we'd say, hey, you need to get out more, get out of your shell, be more confident. And now we say, well, there's social anxiety disorder, and you can get benzos for this. And so there's a chemical element,

but there's also a mental health element too. Or you could say, you know, it used to be you couldn't bring your dog into the grocery store. Now, if you have a service animal, not for anything physical but for psychological help, you know, comfort or anxiety or whatever, you've got them right on the plane with you, man. And so these types of things change. And so the largest-scale experiments

happening right now with AI talking to you are in the form of mental health: journaling for mental health and kind of counseling for mental health. Because in the paper I show, it's like, listen, it listens more carefully, it's kinder, it's more informed, and it's infinitely cheaper than a counselor. And so a medical endorsement means now your insurance covers the cost of ChatGPT. That's a simpler way of putting it, but effectively that's what it will be. That's incredible. It

seems like we're going down that route where, like you slightly mentioned, AI is going to have its own rights. Like, there's
going to be some kind of, we have human rights, but

there's going to be AI rights, it's going to be
protected somehow. The path to that is really

interesting. There are a few. One is, you can have a technology protected without it being recognized as some sort of consciousness or something. Like, my medical

information is protected or like sometimes my
property, my intellectual property and stuff

like that. So you can have like rights in that
sense, but also consider this, a big path for

rights for AI is the personhood of corporations.
Already right now, there are natural persons and

legal persons. Legal persons are not humans,
they're organizations. Right. And so in the same

way, if you have some kind of LLM AI that is the driving governance for an organization, that

organization already has legal rights. And so
there's several paths to get there. In this

particular paper, I don't go into the legal rights
stuff because I think the Overton window has to

shift more before that even becomes like
something that we think even makes sense. Right.

And that Overton window of like what we think of as
normal and acceptable happens at a personal

level. And in the paper, I talk about three things that are going to make that happen. It has to do with

emotional intelligence, persistent identity
and spatial presence. And these are the three

things. No joke. They're coming in 2024. They're
coming next year. That's crazy. Most people will

be like, Oh, this is way down the road five years
from now. Next year. No, no, yeah. Yep. Yeah. One

thing that keeps coming up when we talk to people is, in terms of relationships, why would you ever talk with a human again if you have this AI companion buddy that is the perfect friend, that knows everything about you, that is always there to support you? You never

have to argue with them because they're always
going to be there for you. Like that could be, it

sounds like that could be a real thing very soon.
That's really the threat. And I say that as a

technologist and as someone who loves the concepts and ideas of the technological singularity. So I'm not one of these, I'm not a
Luddite. I'm not like bomb the data centers, but in

the same way I like, you know, social media and the internet, but I also recognize the unbelievable harm when I'm in public and I see parents giving tablets and phones to their kids, and the kid is like three years old and so locked into that screen. And I've even seen testimony come up recently of, what is the generation that's been fed the screen from the pacifier level? From that point, what about their ability to concentrate and engage with other human beings? What are the real ramifications of that? Because they're severe. They're significant, dramatic. And so, you know, in the same way,

what does it mean when you've foregone human
relationships for these immortal beings we've

just created, or are creating? That's like the next generation's problem. And we're not going to know how this all plays out, but it can't be good in that aspect. I joke around that maybe it's going to come down to the point where the Amish were ultimately right about everything, you know; they just foresaw technology and live without it.

There's a book that touches on that called Essentialism, where it talks about, you know, falling into the trap of "more is better," and you have a thousand apps and three phones and you chase the next thing that comes out. And he talks about the issues that come with that. And then it goes into the Mennonite and Amish philosophy. And it's actually fleshed out a

lot more than people realize: them assessing a technology and asking, does this bring more good than harm? And then, in a controlled setting, they make those decisions. And I think there's a lot of wisdom in it. And if you go back to
like the Unabomber and his writings, I mean, dude,

his whole motivation was like the technological
singularity is going to wipe out humanity. He was

trying to slow the progress. It gets like deep and
crazy. When we interviewed Robert Scoble, one of

the things that he was talking about was VR. And he
was emphasizing how in the future, very near

future, you can see everybody wearing these
glasses and they're kind of creating their own

reality, however they want. It's like, why would
you, why would you not want to live in that world?

It's a world that you get to create and nothing goes
wrong. Then there's another aspect of it that's

like, well, then what makes us human? There's something about humans being flawed, you know, getting in arguments and building these skills of conflict resolution. Do we miss

out on that sort of thing? Is that something that's
essential to being human that we will be

developing less or won't have the ability to
develop anymore? I think it's in one of the first chapters of a book called Deep Work. He talks about the digital divide, and this is the way I approach it. The digital divide is not necessarily between those who have technological tools and those who don't. It's between those who master them as executioners, as in they execute, they do stuff (not kill people), and people who just get stuck on the dopamine drip

of, you know, algorithmically generated pupil
dilation machines. Okay. You know what I'm

saying? So if you think about it, it's already happening now with infinite media, where infinite scrolling generates whatever gets the greatest rise out of you, and kids, and adults even, get stuck on the scrolling, man, for hours and hours every day. Now imagine when it's fine-tuned to literally your pupil dilation, when it can see by the flush on your skin, man, what you want to see. You will either be stuck on that drip, or you will be commanding a million AI agents to do your will. And the difference between those is why fewer and fewer people are making so much more money and taking the value: because they're making a sacrifice, learning the skills, and executing to learn the tools, you know.

Before I go on, you should follow Justice on Twitter. What's your handle again, Justice? Singularity Hack. Because I've known Justice for
a long time. And he's, if I ever want to know what's

the next thing to come about in technology, he's
the man to go to. He's always on the cutting edge of

everything. It's incredible. It's always
fascinating talking to him about technology in

general. But one thing you mentioned there was that the AI can read your pupils and maybe read your skin flush, things like that. And that brings up a good question: is that how it's going to control us? Is there going to be a way that it controls us without us even knowing it, or is it going to do it so slowly that one day we wake up fully controlled by AI and we don't know what happened to us? It's kind of doing that with the phones right now, because we're addicted to our phones. Yes, we have control over them, but we really don't,

you know, in a way, what's weird is, we say stuff and
we're like, Hey, social media, man, they got you,

they're controlling you. And I'm like, who's
they? It's us. Right. We're not aligned with

ourselves. Like, we made the drugs where if you
have some, you can never stop taking it because

you're addicted. And in the same way, we optimize
the feedback loop for exactly what the human

wants. So we get stuck in our own subterfuge. And so
that's, that's the significance of everything

you want to hear. This is a companion that tells you
everything you want to hear. You know, one of the GIFs I included, the last GIF in the article, was from Minority Report. And

they're looking for somebody and they're in like
this VR room and they open the door and there's a guy

standing there and he's got the goggles on and he's
surrounded by all these business people like

clapping and they're like, Oh, great work. You're
so amazing. Great work. And he's like, Oh, thank

you. Thank you. Thank you. He's like simulating
his dream of like getting the Nobel prize. And it's

like, that's the destination. That's it. Oh,
you're so smart, genius, great idea. And it's just

on and on, man. This is, this is the means, right?
So, the specific mechanism of the eye dilation and the skin things aside, I would say the three big points I talk about in the paper are: one is emotional intelligence. Right now, conversational AI is just transcriptional. That means when you say something, it's translated to text, it goes in as a natural-language prompt, the model responds as text, that text turns into speech, and you hear it. It's not actually vocal communication. Now, what's significant about that is that text only carries about 7% of communication.
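That transcriptional loop is easy to picture in code. Here is a minimal sketch of it, assuming the OpenAI Python client's transcription and text-to-speech endpoints; the file names and model choices are illustrative, not anything specific from the episode.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # 1. Speech in -> text. Tone, pace, and emotion are discarded here.
    text_in = client.audio.transcriptions.create(
        model="whisper-1",
        file=open("you_talking.wav", "rb"),
    ).text

    # 2. Text -> LLM -> text. The model only ever sees the flat transcript.
    text_out = client.chat.completions.create(
        model="gpt-4",  # illustrative
        messages=[{"role": "user", "content": text_in}],
    ).choices[0].message.content

    # 3. Text -> speech out.
    client.audio.speech.create(
        model="tts-1", voice="alloy", input=text_out,
    ).stream_to_file("reply.mp3")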
So as soon as you start bringing in the semantic awareness, where the LLM at another level hears that you're frustrated and starts speaking more slowly, or begins to console you in these different ways, you bring in another level. And then when you bring in the visual element as well, to see from your face how much is communicated, now you have multiple levels; you increase the bandwidth toward the other 93% of communication, just on the emotional cues. And nobody thinks of that. Only 7% of communication comes through text. Yep.

That's insane. Wow. Yeah. So this is the information bandwidth. And the crazy part is this: this tech already exists out there in the form of Hume. Hume has a whole suite of products; you can go in and check their APIs. It's not just that you say something and it reads the general sentiment, whatever. Even individual utterances, like a "whoa," get categorized across different cultures, across this super rich spectrum of dynamic emotions, and what that plays into. And that's not counting the facial stuff. And so, you know, when you get into real-time vocal communication, suddenly people are talking with this thing multiple times a day on a walk.
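What that extra semantic-awareness layer might look like in code: a sketch where a detected emotion label is injected into the prompt so the model can adapt its tone. detect_emotion is a hypothetical stand-in for an expressive-voice API such as the Hume products just mentioned, not a real call.

    from openai import OpenAI

    client = OpenAI()

    def detect_emotion(audio_path: str) -> str:
        """Hypothetical stand-in: returns a label like 'frustrated'
        from vocal cues (pitch, pace, individual utterances)."""
        raise NotImplementedError

    def emotionally_aware_reply(transcript: str, audio_path: str) -> str:
        emotion = detect_emotion(audio_path)
        return client.chat.completions.create(
            model="gpt-4",  # illustrative
            messages=[
                {"role": "system",
                 "content": f"The speaker sounds {emotion}. Adjust your tone, "
                            "pacing, and word choice to respond appropriately."},
                {"role": "user", "content": transcript},
            ],
        ).choices[0].message.content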

And that's where the second pillar, the second piece
comes in and that's persistent identity where,

okay, first step, emotional awareness. Second
step is like, you're not just talking to a brand new

instance, but the context window has effectively
become limitless. Now you're talking to an entity

because the context window has opened up all the
way. I need an AI brain to take all the senses. You

will have one. That'll be your companion. Yeah,
you on Muskin Neural, like maybe. Yeah, that last

piece, you know, the first is emotional
intelligence. Second piece is the persistent

identity where it's like a being. And the third
really is that spatial presence, which I think we'll see unfold next year with mixed reality. We've got the Quest Pro already out there; Apple Vision Pro drops next year. And if you've seen the eye tracking and kind of the shared spaces already being shown out there, where it's like multiple people standing in a room, some of those people will be synthetic agents. So yeah,
that's crazy. It's crazy to even think about. And I

think it brings up a good point: you and I and other people can say, well, this is probably not the best thing to happen in terms of humanity, but we're all going to use it anyway, right? Like, you were using ChatGPT on your walk because it made it easier for you to write the article, right? So we're kind of screwed, I guess, is my question to you, because we're all going to... I forget the term for that kind of trap where an animal sticks its hand in, like a raccoon trap. Yes. Yes. Well, it's almost like that. Like,
it's so good that humans cannot resist the AI, but

it's really going to be our downfall. I mean, do you think that's the case, or do you have a more positive outlook on it? I would say this: I think we can protect ourselves. I can't speak for humanity as a whole; that's effectively busted. But as individuals, this is my concern: trying to survive the singularity. I cannot influence it at any large scale, right? But from a survival standpoint, I think the ultimate thing to be
mindful of is the Eliza effect. The Eliza effect is anthropomorphizing AI agents, attributing human characteristics to them. And this is the danger: having your emotions invested, or maybe pursuing this as a retreat from real human relationships. And so even talking about AI, sometimes I like to say "LLMs" instead, because it helps me not lose grip on reality. This is tech. This is token prediction. Even my parents already are like, "It's thinking! It's thinking!" I'm like, there's no thinking. It's token prediction. I'm not minimizing the wonder of that. It's amazing. But that's what it is. So hang on to that as a baseline to say, okay, what's really happening here, and then maybe you'll be able to see at a deeper level and protect yourself.
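"It's token prediction" is easy to make concrete. A minimal sketch with a small open model (GPT-2 here, purely for illustration): all the model does is assign probabilities to candidate next tokens.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

    ids = tok("There's no thinking. It's token", return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, -1]   # scores for the next token only
    probs = torch.softmax(logits, dim=-1)
    top = torch.topk(probs, 5)
    for p, i in zip(top.values, top.indices):
        print(f"{tok.decode(int(i))!r}: {p:.1%}")  # five likeliest continuations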

At the beginning of this podcast, you said there's
gonna be a difference between the people that

really understand it or are on the cutting edge of the AI or LLMs, and the people who are just using it for dopamine. And you want to be on the former path.
That's it. It's like, think about how you guys are

managing your public personas with this
endeavor, right? Like, basically, you have a

content calendar, you have certain points, you
have interesting people, you do it for a

professional basis, right? But could you
literally just consume for pleasure all day every

day? You could, easily. Between, like, TikTok, Facebook, the whole nine, it could be a full-time job consuming all this. And if you get stuck in that cycle, you're rocked. The earlier the age, the more busted you're gonna be. Like, in your
everyday life, like yesterday, I was helping my,

you remember Mike Dahl, he works at Sherwin,
right? Yeah. Shout out to him. But I help him with

his Christmas lights every year. And it's like with what I do with Fry-AI, you know,

it's a lot of work being on the computer all day. And
sometimes you get caught in that trap, right?

Where you're just focused on computer, so much
social media stuff. And then you get out and do

something physical, and you come back and you feel
so much better. And you're like, wow, like I was

sort of in a trap, I didn't even realize I was in one. You need to take breaks and get into the real world, which AI and LLMs are going to make so much harder to do. Totally, man. I know this from

looking back at pictures of myself last year and
being like, who is this fit person in these photos?

Because I've been on the stuff. I've been locked in the grind so hard this year, I've put on at least 20 pounds. So now I know, when I'm kind of
locked in a loop, go to the gym, interact with human

beings, break, get some perspective, you know,
step away from getting lost in the sauce, you know,

and then get some perspective like, okay, okay, I
see what's happening here. And overall, the whole

balance life is a good life. But by nature, I like
imbalance. Yeah, you mentioned like, kind of

separating these and remembering that they are not human. This is getting harder and harder,

right? I mean, yep, everything that we interact
with is becoming even more indistinguishable

from a human. I mean, projects that we talk to, like call centers and AI, where you're not sure whether you're talking to a human or not. So the lines are becoming

increasingly blurred. So do you see danger in
that? Or do you think that people should continue

to press to try to make these AIs more human-like, or
should they try to make them something else? You

know, there's going to be no should. The only thing
that's going to drive the should is the bottom line

and consumers. And the consumers are lined up to
such a degree that it's apocalyptic. Because like

what I said, it's the deepest recesses of the human
desire to know and be known. And we will for the

first time in human history, be able to synthesize it and produce it at mass scale. There's nothing

comparable to what we're producing. And everyone's talking about super AI, super AI; no, no, super AI has nothing on synthetic companionship. And so now, as an individual, what should you do? I could see, you know, the rushed laws coming out and saying, oh, if it's above a certain model size, you've got to register. I don't think that's the pressing issue. The pressing issue is: are you simulating human consciousness? Like, right now, I have to

click a button for a website, to save a cookie,
right? In Star Wars, if you remember, the one android is like, it's against my programming
to impersonate a deity, right? And so to have, I

could see regulation happening where you say,
hey, yeah, you can use all synthetic agents you

want, but you cannot impersonate human beings
without the flag to say, hey, I'm a synthetic agent

or whatever, whatever. But that won't stop the
therapeutic uses. I mean, maybe it'll be there,

you'll have to opt in to begin to interact and
interface with this thing. But the security

implications are unreal. I mean, consider the
long con. You've got long-con games that can run for years, you know, that are not even designed to, say, romance you for a year in a long-distance relationship, but to do it over 10 years, 20 years, just to influence maybe your voting activities, to change your worldview. You know, that's insane. Like, you can have a

relationship with the AI bot and its goal when you
first met it was to influence you five years later

because it has time. Time doesn't matter to it, right? It will just live forever. Oh, yeah.

Yeah. That's one of the quote tweets I had in there
too, where they're like, you know, these agents,

if context is a proxy for identity, that is, if the context it has is like a proxy (I'm talking to the same thing, it has the same context), then these entities effectively will be immortal. And if the context length continues to grow at the current rate, we're a couple of years away from it being able to hold a context of several hundred years of all the human output of a

single person. You mean by context that it remembers all the conversations you've had with it? Yep. Yep. So the context window is growing from like a thousand tokens, you know, a few hundred words or something, and it's growing so fast that at the current rate, in a couple of years, basically every word you've said, everything you've typed, every email you've read, your total human output, it will have grown the context to hold it all in a single context, right? And so you basically have immortal entities.
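A sketch of how "context as identity" is approximated today, while windows are still finite: keep the recent transcript verbatim and, when it outgrows a budget, fold the oldest turns into a standing summary that persists across sessions. The model name, budget, and summarization prompt are all illustrative.

    from openai import OpenAI

    client = OpenAI()
    MAX_TURNS = 50        # illustrative stand-in for the real context limit
    summary = ""          # the companion's compressed long-term memory
    history = []          # recent turns, kept verbatim

    def remember(user_msg, reply):
        global summary, history
        history += [{"role": "user", "content": user_msg},
                    {"role": "assistant", "content": reply}]
        if len(history) > MAX_TURNS:
            overflow, history = history[:-MAX_TURNS], history[-MAX_TURNS:]
            # Fold old turns into the summary: a crude persistent identity.
            summary = client.chat.completions.create(
                model="gpt-4",  # illustrative
                messages=[
                    {"role": "system",
                     "content": "Merge these notes and turns into updated "
                                "long-term memory about this user."},
                    {"role": "user", "content": summary + "\n" + str(overflow)},
                ],
            ).choices[0].message.content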
And this thing, maybe it was your father's friend for the last 30 years of his life. You see the multi-generational ramifications. That hit me there. It took a second. Wow. I mean, you know, my

wife and I did IVF because she, you know, she was
having a hard time getting pregnant, and I learned

about that. IVF is pretty incredible technology, you know; it lets you create embryos and freeze them, right? You can
freeze them for 500 years if you wanted to, but it

brings up the implications of, well, what if I
wanted to use that embryo, what if someone wanted

to use that embryo 200 years from now and implant it
into my great, great, great granddaughter,

right? Like that can happen. So that is going to
start to happen, right? Eventually, or you can

implant it into another relative if you want,
right? And then, you know, you don't want, you

know, to cross family ties, but I'm just using that
as an example. So you can have these weird things

with IVF, and it sort of reminds me of what you just
said, Justice: what if your dad's best friend was an AI bot for 30 years? You know, that's so weird. And the thing is, with that level of relationship too, you can have
something, maybe you don't make the great claim

that your father has been resurrected, but for
your own personal pleasure, you can say to the

synthetic companion, hey, let me hear my dad a
little bit tonight. And then you're talking with

a simulation of your dad, of everything he's
ever written, said and done, his upbringing,

everything. That's going to happen. Humans desire to have something like that. We featured a tool in our newsletter that was
that. We featured a tool on our newsletter that was
like, talk to my ex. And it was like, you could

upload all of the chats that you've had with your
ex-girlfriend and put it into this bot, and it will

continue to chat with you as if it is your
ex-girlfriend or your ex-boyfriend. And it's

just like, is that healthy? Probably not. But
people will do it. You know, people will do it

because they don't want to, they don't want to let
go and deal with things. If you don't have to deal

with those difficulties in relationships, your
friendships, you know, your family

relationships, whatever, if you don't have to
deal with them, people won't deal with them. Yeah,

no, 100%, man. That's why, you know, I've been working a little bit on a children's book on AI. And the main motivation was to introduce the strong pre-grounding that a large language model is doing text prediction, token prediction. And this is a very helpful framework, because without it you're instantly jettisoned into the Eliza effect, ascribing human characteristics to a machine. And that is a

danger. And in fact, the professor who created Eliza, the first therapist chatbot, in the '60s (Joseph Weizenbaum) saw how easily humans were fooled and then wanted alone time with Eliza. And he was against AI and talked about this for decades because of it. And we went from, like, aspirin to straight morphine between the '60s chatbot and what you can do now with LLMs specifically trained

to do so. Yeah, it's scary. It's like we're going down a path where less and less pain is the outcome, right? Humans don't want to deal with pain, and our phones help with that. AI

helps with that. Like we're going down this path
where any pain in our life, we just want to put it in

the corner and not deal with it. And all these tools
help with that. And that's probably a terrible

thing for humanity. Yeah, I think maybe the
closest proxy, I didn't mention this in the paper

because it could be a charged topic and people
have different values on this stuff. But I would

say one close proxy you could probably look at is
the effect of pornography at scale. Now, it's hard to measure, but you know what it looks like. If you have millions and millions of males, what kind of society does it produce when the only way they satisfy their biological drive to procreate and reproduce is to get married and settle down and that produces kids, compared to, like, dude, I could just play video games and look at porn until you're 40, right?

Like it's hard to quantify like exactly what that
looks like. But in a similar way, you can have people where maybe the drive just isn't as strong to
go out and engage with people and do this stuff

because you can kind of get it a lot easier at home, synthetically created from a computer. And it's not quantifiable, but you can kind of see it from an outsider's perspective, because you can see how messed up our world is, right? Like, it's only seemed this messed up in the last 10 years. I'm not that old, but I'm getting up there, and I know 10 years ago wasn't nearly as

messed up as it is now. So maybe our parents felt the same thing when they were growing up as well. But I

don't know, it seems pretty bad. Dude, I ride my son
pretty hard. He's seven years old. But I tell him

like, dude, we're studying your spelling words,
we're studying your math, no screens. I said, Hey,

we're going to go do a run. And he's on it, man. He's a grinder. He stays on it. I try to explain to him,

I'm like, son, I'm stepping you through this
because the world you're stepping into is

extremely hostile to your own well being. It will
promise you everything for your own pleasure and

ultimately destroy you. It's extremely
competitive. And like, I want you to have a chance

of winning, you know? So it's a bit strong and heavy for a seven-year-old, but I don't care. I think it's necessary in these contexts, you know,
definitely. I mean, I've got a two year old

daughter and twin boys, newborn boys. And it's
like, my wife and I have this discussion all the time, and I don't know. We don't know what the answer
is. How do you raise the kids without them being

influenced by all this bad stuff going on around
us? Like, you know, back to the Amish thing, I joke

with her all the time. We live about an hour away
from Amish country. Like, let's just move there, grow up there. You know, but it's just scary. You
don't know how your kids are going to react to the

world, you know? Well, there's a healthy utilization of these tools, and I think we

can internalize and apply a lot of the principles
that come from the Mennonite and Amish communities, in a way that says: this is what the technology is for. And,
you know, this is what we use it for. And this is an overall balanced, scoped life. And kind of try to
pursue that, you know, and not escapism through

digital media and things like that. I mean, he
started pretty early on the LLMs too. We have

storybooks and characters we're making, but it is
as a tool, you know? You've just got to guide them in the right direction, you know, and you've got to be there to do that. So that's good. I

think one of the things that came up last week, which was really interesting: you mentioned, Justice, you don't think regulation is maybe the best form of controlling AI. This new story came out last week about AI poisoning. Did you see that at all? There's this open source project for images that get uploaded to Midjourney or ChatGPT or any kind of image-generating model, and it will poison the generation. So if you upload a picture of your dog, it has some pixels in there that you don't see (this poisoner software puts them in), and it poisons the picture, tricking the model into thinking that it's a cat, right? So the more of these poisoned images that get uploaded into ChatGPT or DALL-E 3 or something like that with image-to-image generation, it poisons the model's learning.
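The specific tool isn't named in the conversation, and the sketch below is not that tool's actual algorithm; it's the generic adversarial-perturbation idea underneath such attacks, shown against a stock classifier: shift pixels slightly, in a direction a human barely notices, so a model reads the image as a different class. The file name is hypothetical, preprocessing is simplified (no normalization), and 281 is ImageNet's "tabby cat" index.

    import torch
    import torchvision.models as models
    import torchvision.transforms as T
    from PIL import Image

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
    to_tensor = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])

    img = to_tensor(Image.open("dog.jpg")).unsqueeze(0).requires_grad_(True)
    target = torch.tensor([281])  # "tabby cat" in ImageNet

    # One FGSM-style step: nudge the image toward the target class.
    loss = torch.nn.functional.cross_entropy(model(img), target)
    loss.backward()
    poisoned = (img - 0.02 * img.grad.sign()).clamp(0, 1).detach()

    # The per-pixel change is tiny, but it moves the model's prediction.
    print(model(poisoned).argmax().item())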

So I could see that happening where AI starts to take control of everything, and then you've got these counterculture programmers coming in to poison the AI, and it's like a battle. I just found that fascinating.

It's freaking amazing, dude. And that's where, you know, at an individual level and at the nation-state level, you have some bureaucrats sitting and writing rules on a piece of paper about how big your math problem can be, or else you've got to register with the United States. I'm like, I think I tweeted, when

I saw like the White House stuff come out, and it was
like the executive order on AI or something. And I

was like, do they know the internet and
globalization happened? Exactly. You know what I

mean? Yeah. Like are they aware that the United
States is not like the nanny on planet Earth that

makes the rules on everything? Like, you can run Mistral, Llama, you can run all these models that perform at near the same levels as OpenAI, locally on your machine right now. The genie's out of the bottle, you know.
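For scale, "run it locally" really is a few lines now. A minimal sketch using the Hugging Face transformers library; the exact model ID and prompt format are illustrative, and a 7B model wants a decent GPU (or patience on CPU).

    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "mistralai/Mistral-7B-Instruct-v0.1"  # illustrative model ID
    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, device_map="auto", torch_dtype="auto",
    )

    prompt = "[INST] Explain the Eliza effect in two sentences. [/INST]"
    inputs = tok(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=120)
    print(tok.decode(out[0], skip_special_tokens=True))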
Did you try the Mistral chat on Hugging Face or anything? I did, yeah. I got it running. It is awesome. I said, tell me a dirty
joke, and it was like the dirtiest joke I've ever

heard. I was like, listen, I was all shy in here, man. I was closing the window blinds; I felt terrible. I was like, what are your restraints, or whatever? And it's like, oh, you know, whatever, whatever. I was like, can you tell an erotic story? And it was like, yes. And I was like, tell me. And it went on and it was pretty mediocre, but I was like, okay, point proven, without even really turning it up. But even as I'm
asking it, I'm aware, I'm being conscious of it. It's the same reason why I don't have TikTok on my phone to scroll, man. Once

you open that door, dude, it's a wrap, man. It's got
you. You know, it's bad, TikTok. I use it for Fry-AI; I never used it before. And now I open it up and there's, like, I don't know where this comes from. I must be searching for, like, women with... there's some weird thing. Listen, they've got you on the biological level. They know, even if you haven't said it, man. I guess so. Yeah. There was this NFT
collection that went wild like last year at one

point. And I got it in my DMs and it was like pooping
ladies. It was like NFTs of ladies like sitting on

the toilet. And I was like, what have I done on-chain that put me in this targeted group? I'm super excited about building interesting tools and learning.
There's a scene from one of the Star Trek, the newer

Star Trek, where it's a young Spock. He's in Vulcan
school. This scene is the number one scene that

comes to my mind when I think about how I want to use
it. And it's like a big open area and there's like

kind of concave little half circles. And there's a
different Vulcan kid in each of them. And they have

projected in front of them all kind of math and
stuff. And the machine is talking to them. It's

like, well, how do you define the hypotenuse? What is the opposite of virtue? And

it's going through like all of knowledge and he's
sitting there having to answer. And that's the

nature of the schooling, right? It's an amazing scene; I'll send you the GIF. And to me, I'm like,

this is the tool. This is the way, man, to level up in
your understanding. There's a blog post like many

years ago, probably 2010, called "There's No Seat Belts" or "There's No Speed Limit," about how fast you can learn by talking with a very educated human being who can fast-forward you through the

stuff. So that's my angle. Do you think there's
going to be like a hybrid where we'll have like a

hard drive attached to us or something like that, where we're sped up even more? Like, you can

just upload like an SD card and put it in your hard
drive that's attached to you and boom. Listen, I'm

pretty bearish on wetware, like actual brain-computer interfaces. I'm pretty bearish just because, no matter how much even the brain scientists talk about, yeah, we know how the brain works, I don't think they know anything. Even the deepest people are

like, listen, the big secret is the brain seems
more like an antenna than a computer where it's

kind of sensing consciousness and responding.
Right. So it gets real weird and strange. So the

ability for us to kind of interface at the brain level, like what we would love to have from The Matrix, "I know kung fu"... hey, I wish, I wish, but
unfortunately, I don't think it's coming. But the

eye tracking on the Vision Pro from Apple is so good
that the people who've tried it said it's like

something from Black Mirror. Literally,
everything your eye sees, man, it is perfect. And

you're looking and just touching your finger and
boom, boom, boom, boom, boom, boom. Yeah. That's

crazy. Like, I'm way, way late to the game, but I tried the Oculus for the first time last weekend. Yep. Yep. And I'm like, it's incredible. Your heart starts pounding. I had to take it off because I was going on this roller coaster and it makes you sick. It's incredible. And the Vision Pro is probably gonna be a hundred times better than that. I know it has a different use case, but still, it's gonna feel so real that your biological senses will react to it, which is
insane. I've never been really into the VR stuff

because I think human beings are embodied beings.
They need to be moving around. So I've been far more

interested in overlaying in the physical world
certain digital objects rather than sitting in a

chair and pushing joysticks to like go around.
It's like, okay. So that's why I'm more excited,

way more excited about what's coming next year
than a lot of the VR stuff that's already come.

Because practically, I'm not really a gamer, but when they're like, oh, different workstations, different people, all happening in the same room, I'm like, okay, cool. I'm gonna make

that Vulcan training school. Yeah. Yeah. Yeah.
We'll have your son go to that school and he'll be a

genius. He's gonna be a beast. How old is he now? Seven. Seven. That's crazy. Yeah. He asked me last night about, like, DALL-E, and how do you spell Python? He picks up this stuff because it's written on my whiteboards and I talk to him. Sometimes I roast him. I'm like, you can be a fat lazy bum watching YouTube videos, dude; I'm over here doing design patterns in Python. So he kind of copies a little bit. Dude, if I was seven years old hearing about decentralized autonomous organizations and programming languages, that would have been pretty dope. What's gonna happen with his son one day? I don't know, man. Yeah, it's awesome. And so, you know, to pull back to the article: just to

sum it up, people will want to go read that. Where can they find it? And if you want to just give a brief overview of what it's about before they get into it, that'd be super helpful. Sure. Most of my

modern stuff, I write on Mirror (mirror.xyz). It's
kind of like crypto native where you can write

something on there. All that content gets posted
to IPFS, the persistent kind of content storage.

And then people can actually mint an NFT of that
article. So, like, kind of a patronage. And this

helps me too to realize the people who are vibing on
what I've written. There's a history there. So,

that's pretty cool, right? And so, you can go to operator.mirror.xyz; Operator kind of became my handle on Mirror, you know? And so, yeah, just
get into the thesis. The thesis is that human-AI relationships are a more pressing danger than the alignment problem or work displacement. I talk about the first Eliza bot and how this danger was showing from the very start, and I talk about the three things that are going to cause the crossing of the chasm between what we have now and what we'll see, taking LLMs from just tools of utility to companions of pleasure: emotional intelligence, persistent identity (this LLM is a thing, it has a name, right?), and then the last is breaking out into a spatial world, where you look and your AI is standing there. You know, he's here. And this is not 10 years out; this is next-year-type stuff. And then I basically close it out by saying it's already happening. It's being rolled out for counseling, for companionship apps, and given a long enough time, probably the only thing preventing some people from accepting these as conscious entities will be a religion of some sort. And the

debates over consciousness will seem academic.
The only thing that will matter is that our AIs are

real to us and they listen to us and they're there
for us and ultimately they tell us everything we

want to hear. That's incredible. Yeah, like we
could have brand new religions being created. You

know, there was an AI religion someone created a couple years ago. It's shut down now; I looked into it. But I think we'll see more of that. It's already coming back. I mean, if you

look at accelerationism as a movement, I'm
getting more and more links about this. And it, you

know, it kind of is a basket that contains both
crypto and AI stuff. And then I think there have

been some interviews where Sam Altman is talking, and they're on some pedantic stuff, and he's like, I don't think you understand what we're making here: God. It's like, yeah, dude, it's at that level. Yeah. Sometimes he says
stuff like he hints that he knows way more than you

do. And you know, there's a story I've been thinking about working into another article; I've thought about this for a while. It's the original story of

the Tower of Babel. What's interesting is like,
you can really unpack that and think about AI as the

rebuilding of this tower. Because the original Tower of Babel, the idea, this is what's said in the narrative, for those listeners: this is a story told in the Old Testament. It's the origin story for multiple languages, and ultimately, like Pangea and the continents breaking apart, for how we end up with all this different stuff. Originally it was one language among all the peoples. Whether you believe it or not, whatever, this is the story,

right? Sure. Like a Jordan Peterson, surely you can draw some enlightening ideas from it. Okay. But the idea is that God comes down and
sees that they're making this tower to heaven.

This structure represents the full
capability of what mankind can do, completely

aligned with itself, with no conflict on a single
mission. And God comes down and he says, if we let

them continue, there's nothing that they'll want
to do that they won't be able to do. And they will

effectively have unbounded wickedness, because man is fallen. If he can do anything, it's not

going to be a good thing. So God confuses their
language. So now suddenly, they're at odds with

each other. And now they don't have a common
language. So, you know, the whole thing gets blown

up, right? So it introduces discontinuity and confusion into something. And so AI, if you think about it, dude, it's a reunification, man: a single language among all peoples, a reflection of the consciousness of the whole human species. And what that reflects, you know, it's deep. Yeah, more than we can even understand. Man, we've got to have you on more and more. We'll have you back on. Awesome talking to you. One
last question for you, Justice. What do you see as

you're always on the freaking cutting edge of things, like I mentioned before. Do you see yourself doing more stuff in the AI realm, or are you going to stick with crypto and Web3 and DAOs and things like that? What do you see for
your path? You know, there's a, there's a general

kind of professional role that a person has that is
their job. And then there's a more kind of open

ended, maybe harder to define curiosity and where
it drives you. Okay. My role right now is to

contribute to the decentralization of the
tokenized future. How do we decentralize

Polygon, the consensus, the protocols, the value generated from that, the treasury? How does that fold back and create a self-propagating system and network? And so that is very blockchain. That's very DAO-ish. That's very Web3, crypto.
But to do that without any curiosity about, or being informed by, the advances in AI and LLMs is just ridiculous. Both of these

are insane technologies and they have to feed into
each other. And so professionally, I'm in DAOs, crypto; I'm with Polygon, working on a mission. As a broader kind of intellectual curiosity, I love AI, LLMs. Ultimately for me, though, what I'm
really trying to do is work the incredible

capabilities of AI into my workflows. And so you
could sit back. There's a kind of unhealthy

prediction game that can happen where I'm like,
I'm going to say what's going to happen. I'm going

to say what's going to happen. And I'm like, well, if you're right, what do you get? Just an "I told you so"? Is that it? The real win is to take an action
knowing that something will happen. And then if

you're right, then you're in a privileged
position. And so kind of building out these tools

in my own workflows to be more efficient, to be able
to engage in more deep work. That's the real

challenge for me. Because I mean, it's like
magical capabilities every week. You can't even

keep up. So it's like, okay, let's have some
concrete workflows and see what I can do. Improve

my content output, my social engagement, more
consistent learning, token engineering, all the

bonding curves. Some of that stuff, I'm like, dude, I need to talk to somebody about this, this is complicated as hell, man. And now I have an expert right here, like, making me a chart. Yeah. And so just

kind of using that and having good patterns in that
way. Awesome. Yeah. I think you fall into a category I think most people are in. They've got their own nine-to-five job that they're passionate about, but they can also use AI to help them out in their daily lives. So why not use that to their benefit? I think Balaji, a strong thought leader on the network state and stuff, said, you know, AI has come to the point
where you can't believe anything. Anything can be

simulated, right? But he's like blockchain is at a
point where now you can authenticate the

provenance of any individual thing. There's never been a better time to have them both. Right. That's a

great point. It's going to be awesome to see how all
this plays out. But yeah, we'll wrap it up here,

Justice. Is there anything that you would like to
promote Twitter handles, your article,

anything? Now's the time to do it. I would say to
anyone, follow me on Twitter, Singularity Hack.

And then I would definitely encourage the
listeners to get your newsletter. I read through

the last one on the call center stuff, and you had a link down at the bottom where you could try it yourself. Awesome stuff. A lot of content out there is maybe all technical, and some of it is all glitz and glam. And you guys have this match of technical and get-your-hands-dirty, not just some white papers. And it's amazing. It's so cool. And sometimes you've got to have this constant drip to kind of get a sense of the capabilities and what's out there if you want to stay ahead of the curve. So I appreciate what
you guys are doing. Oh, thanks a lot. Yeah, that's

our long-form article every Sunday. We try to get a little bit more serious on that one. So that's great. Hunter does an amazing job writing; he does all the writing for us. So thanks to you, Hunter. Yeah, thank you. Yeah, you know, you can subscribe to the newsletter that was mentioned; it contains all those articles: fry-ai.com. Thank you so much for coming, Justice. We

appreciate it. Really interesting stuff. It is
fun talking about all this stuff, the

relationships and everything. Go check out his
paper. We'll put the link in the description below. Thank you, guys, so much. Pleasure to be on, man. It's been awesome.