Leadership Sovereignty Podcast: Career Growth and Promotion

If you are putting sensitive company data into a free AI tool — stop. Ralph and Terry break down why free AI tools are not safe for company data, how data leakage could cost you your job, and what the trust but verify principle means when working with AI outputs.

Show Notes

If you are putting sensitive company data into a free AI tool right now — stop. That information is being used to retrain the model. And that could mean a data breach, a policy violation, and the end of your career at that company.

In Part 4 of the Leadership Sovereignty Podcast's eight-part GenAI series, host Ralph Owens and co-host Terry Baylor tackle the two biggest risks professionals face when using AI at work — hallucination and data leakage. Ralph explains exactly why the free version of any AI tool is not safe for company data, how to use sanitized inputs to protect yourself, and how to find out what your company's approved AI policy actually is before you make an expensive mistake.

They also cover the feedback loop technique — asking AI to critique its own output against your original request — and why prompt engineering is the single skill that separates the 20% getting real value from AI from the 80% who are not.

In Part 5 you will see exactly what prompt engineering looks like in action across real career scenarios — from job interviews to project management to difficult leadership conversations.

What you will learn in this episode:
- Why free AI tools are not safe for sensitive company data and what to use instead
- How data leakage works and why it could cost you your job
- How to use AI safely with sanitized data to protect yourself and your organization
- What AI hallucination is and how the trust but verify principle applies
- Why the feedback loop technique dramatically improves every AI output

This episode is for you if:
- You are using a free AI tool at work and are not sure if it is safe
- You want to understand your company's AI policy before you make a costly mistake
- You want to use AI more confidently and strategically at work
- You are ready to move from basic AI user to someone who uses it with discipline and skill

***********************************
🎧 Get your free AI Toolkit: https://www.leadershipsovereignty.com/ai

***********************************

📖 Chapters:
00:00 – Welcome & AI video creation demo
01:09 – Trust but verify: AI hallucinations explained
02:27 – Pro tip: Feedback loops with ChatGPT
03:29 – The #1 risk: Data leakage in AI tools
06:22 – Protecting personal and company data
07:38 – Free vs. paid AI tools and policies
10:46 – Normalizing AI safety, like learning to drive
14:16 – Story: A student turns AI into career advantage
18:22 – Why AI creates opportunity, not just risk
21:26 – Key takeaways: prompt engineering, data safety, and career growth
23:15 – Closing and listener survey


***********************************

📺 Watch This Episode on YouTube
Click here to watch the full video of this episode.

***********************************

🌐 Visit the Leadership Sovereignty Hub
Everything you need to lead with clarity, confidence, and strategy — in one place.

https://www.leadershipsovereignty.com

- Listen to or watch our full episode library
- Download free leadership frameworks, guides, and tools
- Read in-depth show notes and guest takeaways
- Submit your leadership questions or episode topic suggestions

***********************************

📩 Get Every Episode Delivered to Your Inbox
Join the Leadership Sovereignty newsletter and get new episodes, leadership insights, and exclusive resources — plus get instant access to our free AI Toolkit: the practical guide smart professionals use to think clearer, communicate better, and make smarter decisions at work.

*Sign up for our newsletter*

***********************************

⭐ Help a Leader Find This Show
If this episode added value to your leadership journey, take 60 seconds to leave a rating or review. Every review puts this content in front of another professional who needs it.

🟣 Leave a Review on Apple Podcasts
🟢 Leave a Rating on Spotify
✍️ Leave a written Review

***********************************

🔁 Know Someone Who Needs to Hear This?
Share this episode with a colleague, mentor, or team member who would benefit from this conversation. Leadership grows when we invest in the people around us.

***********************************

📲 Follow the Conversation
Stay connected and join the leadership conversation on social media.

▶️ YouTube
🔗 LinkedIn
📸 Instagram
🎵 TikTok
🧵 Threads
🐦 X
📘 Facebook

***********************************

🎤 Work With Ralph & Terry
Looking for dynamic speakers for your next event or leadership summit? Ralph and Terry bring executive-level insight, real-world leadership strategy, and an engaging presence that moves audiences to action.
*Connect with us*

***********************************

💰 Support Independent Leadership Media
This show is independently produced to keep the content unfiltered and audience-first. If these conversations are helping you lead better, your support directly funds the next conversation.
★ Support this podcast ★

***********************************

📄 Full Episode Transcript

What is Leadership Sovereignty Podcast: Career Growth and Promotion?

The Leadership Sovereignty Podcast, hosted by Ralph E. Owens II and co-hosted by Terry Baylor, is a career acceleration platform for professionals seeking growth, influence, and promotion.

Leadership isn’t just a skill — it’s a career strategy.

Hosted by Ralph Owens, Chief Information Officer, and Terry Baylor, CEO of a healthcare technology startup, the podcast delivers practical, real-world leadership strategies you can apply immediately. Each episode focuses on increasing visibility, navigating corporate dynamics, preparing for promotion, and leading beyond your title.

Expect short, high-impact conversations designed to help you think, act, and position yourself like a senior leader before you’re given the title.

You’re not just an employee. You’re the CEO of your career.

New episodes are released bi-weekly.

And I think, Terry, you were just mentioning right before the show that Gemini now has the ability to take text and create a video. Is that right?

Right, yes, yes, yes. So I was playing around; I'm like, hey, what are all the tools I get with my Google Workspace subscription? And I'm looking through, I'm like, most of these I use. Then I'm like, Vids, what is that? So I started playing around with it last week and creating some content. And yeah, I said, hey, create me a one-minute video on HIPAA and privacy. And I was astonished with what it delivered. Like, wow, this is the future. Y'all have to get on board with this. You can't just say, oh, that's for the kids or whatnot, because you'll look up in five years and this is part of your job and you have no idea how to use it, right?

So that's what he meant by "I don't believe what anyone says," because I may ask you a question and you're going to give me a very confident answer based on what you thought I said. But let me probe that a little further: OK, so this is what I heard you say. Is this what you meant? No, that's not what I meant at all.

Exactly, exactly. In human terms, that... in ChatGPT terms, that is a hallucination. That person gave you a confident answer back.

Good. That's a great example, yes.

Right. But we have to drive home why we most likely asked that follow-up question: because of our experience, our knowledge, and our understanding of what it is we're talking about. We're like, oh, I heard that, but I don't know if that was quite what I was looking for, or they didn't seem as confident in that answer, and they were kind of talking all around the question without really answering it directly. So it's no different than when you talk with a human being: the first time someone gives you a quick answer back, you're like, OK, let me dig into that a little more. It's the same thing, guys. So that principle of trust but verify still exists.

That's perfect. That's perfect.

And here's a pro tip, a pro tip for using these GenAI tools. I don't know how many people have really caught on to this; I've caught on to it because of the workflow that I do, and it's been extremely helpful: feedback. What do I mean by that? When you take the time to create a really well-thought-out prompt and you ask it to give you something very, very specific, the next question I ask it is: give me feedback on what you just gave me, based on what I asked you. It will review its own output and say, you know what, you're right, I missed this; let me correct that for you. And I'll go through that feedback loop until it's perfect. Most people don't realize that ChatGPT can give feedback on itself. So, pro tip: build that into your prompts. The prompts that I have may contain four different sections, and I'll say, pause when you finish Section 1 and don't proceed until I acknowledge. It'll give me what I want and say, OK, would you like to move forward? I'll say, no, I want you to give me feedback on what you just produced. And it'll say, oh, you know what? I missed this; let me correct that. Don't take it at face value the first time. Take the time to review it yourself, and you can even use the tool itself to give its own feedback to improve the output. So feedback loops are really important.

But yeah, "trust but verify" really sums it up, Terry. I think that's killer.
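Ralph's feedback-loop technique boils down to a repeatable prompt structure: sectioned work, explicit pauses, and a self-critique request. Here is a minimal sketch in Python that assembles that scaffold as plain text. The function name, section labels, and wording are illustrative assumptions, not part of any tool's API; you would paste the result into whichever GenAI tool your company has sanctioned.

```python
def build_feedback_loop_prompt(task: str, sections: list[str]) -> str:
    """Assemble a prompt that asks the model to pause after each
    section and critique its own output against the original request."""
    lines = [f"Task: {task}", "", "Work through the following sections in order:"]
    for i, section in enumerate(sections, start=1):
        lines.append(f"Section {i}: {section}")
    lines += [
        "",
        "Rules:",
        "- Pause when you finish each section and wait until I acknowledge.",
        "- When asked, give me feedback on what you just produced, judged",
        "  only against my original request, then correct anything you missed.",
    ]
    return "\n".join(lines)

# The follow-up message that closes the loop after each section:
CRITIQUE_PROMPT = (
    "Before we move on: give me feedback on what you just produced, "
    "based on what I asked you. If you missed anything, correct it."
)

prompt = build_feedback_loop_prompt(
    "Draft a one-page project status update for my director",
    ["Summary of progress", "Risks and blockers", "Asks and next steps"],
)
print(prompt)
```

Sending `CRITIQUE_PROMPT` after each section is the "don't take it at face value" step: the model reviews its own answer against your request before you accept it.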

And let's talk about the second risk of using GenAI, probably bigger than hallucination, which is data leakage. Data leakage is the number one risk in using GenAI, because if you input sensitive information, sensitive data, into GenAI, things like client names or financial strategy, that information could be stored, logged, or used in ways that compromise privacy, especially on free and unsecured platforms.

So what does that mean? If you go and sign up for the free version of ChatGPT or the free version of Gemini, it can do all these wonderful things that we just talked about, but the GenAI model itself is what's called a large language model. Basically, think of it as a big database of information. It's constantly training itself to get better based on the information that's in that database. When you use free tools, anything you put into that tool becomes part of the database it is now retraining and relearning on.

So what does that mean? It means that if you put sensitive information in there, and a lot of people do this, they're careless, they'll just put in documents from their company that have sensitive information in them to get a response, they don't understand that that data is now part of the large language model. And if other people who are using the tool ask similar questions, your data can now come up. So now you have a data breach. Please hear us when we say this: do not do this, OK? Be very clear: when you use a free tool, that data is not private.

So what you have to do is use it with sanitized data, data that doesn't have any particulars or specifics in it. Once it gives you the framework of the output that you want, you can copy that out and fill in the specifics when you put it into your company systems. And Terry, before we move forward, any thoughts on that point?

Yeah, basically, man, it's personally identifiable information, right? We just have to protect it. It's that sensitive information. So just be diligent, right? Due diligence, due diligence.

Yeah, yeah.
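The "sanitized inputs" advice above can be roughed out in code: strip the particulars before text ever reaches a public GenAI tool. This is an illustrative sketch only, with made-up regex patterns and a hypothetical sanitize helper; a real workplace filter would rely on your company's approved DLP tooling or a dedicated PII library rather than a handful of regexes.

```python
import re

# Rough, illustrative redaction patterns. A production filter would be
# far more thorough than these three examples.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def sanitize(text: str, client_names: list[str]) -> str:
    """Replace obvious PII and known client names with placeholders
    so only the sanitized framework of the request is shared."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    for name in client_names:
        text = re.sub(re.escape(name), "[CLIENT]", text, flags=re.IGNORECASE)
    return text

raw = "Acme Corp contact: jane.doe@acme.com, 555-867-5309. SSN 123-45-6789."
print(sanitize(raw, client_names=["Acme Corp"]))
```

The placeholders keep the structure of the request intact, so the tool can still produce the framework you want; you then fill the real specifics back in inside your company systems, exactly as Ralph describes.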

So let's talk a little bit more about how you can use GenAI tools around that sensitive information. Each tool, each platform, has its own level. You have the free version, where everybody can get in and use it. Again, none of that data is protected, so never put sensitive information into that. But then you'll have the paid version, which will give you tools to protect your data. So Copilot, which is one of the leaders in the industry, especially in business: if you have a Microsoft subscription with your company, that Copilot data may be secured. Always check with your company to find out what its GenAI policy is. That is critical. You need to understand: OK, what is my company saying is the approved GenAI policy for me to use with my company's data? That could be the difference between a data leak and termination. This is very, very serious. You do not want to put your company's data in a free tool, because now you're accountable for a possible data leak if, like Terry said, you put PII in there. So check with your company again on their GenAI policy: hey, can I use GenAI here? Can I not use it here? Again, what you do at home is obviously your own personal space, but you just want to be very mindful that you do not save sensitive information in a public GenAI tool.

Now, here's the thing. There are some tools, like the paid version of ChatGPT, which is something I pay for, that give me the ability to tell it: do not use any of my data to retrain your model. OK, so I pay to make sure that doesn't happen, but the free version doesn't have that capability. So definitely use at your own risk, but be smart about it. Do not put sensitive information into GenAI tools unless your job has given you a sanctioned tool to be able to do that with.

Yeah. And I think, Ralph, the only thing I'd add to that: some companies have data loss prevention policies that won't allow you to do that. But here's what will happen. If you hit that particular control, then yeah, you may get an email: hey, we recognized you were trying to place some IP or PII into some kind of AI tool that is not sanctioned. More than likely, in that scenario, you're probably just going to get a warning. But what we want to do is equip you on the front end to understand how to use the tool.
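The data loss prevention control Terry describes can be pictured as a simple pre-submission gate: scan the outbound text, and block it with a warning if sensitive markers appear. The sketch below is a toy stand-in with hypothetical marker patterns, not how any vendor's DLP product actually works.

```python
import re

# Hypothetical, simplified stand-ins for corporate DLP rules: each entry
# is a policy name plus a pattern that flags sensitive outbound text.
SENSITIVE_MARKERS = [
    ("PII (SSN)", re.compile(r"\b\d{3}-\d{2}-\d{4}\b")),
    ("credential", re.compile(r"(?i)\b(password|api[_ ]?key)\s*[:=]")),
    ("label", re.compile(r"(?i)\bconfidential\b")),
]

def dlp_check(text: str) -> list[str]:
    """Return the policy violations found in `text`; an empty list
    means the text may be submitted to the sanctioned tool."""
    return [name for name, pattern in SENSITIVE_MARKERS if pattern.search(text)]

violations = dlp_check("CONFIDENTIAL: Q3 forecast. api_key = abc123")
if violations:
    # A real DLP control would block the upload and notify security.
    print("Blocked before submission:", ", ".join(violations))
```

Running the check before anything is pasted into an AI tool mirrors the "warning email" behavior: the control fires on the way out, before the data ever leaves.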
It's no different than when you get behind the wheel of a car. When you first started being trained to drive, you learned: you stop at a four-way intersection, you let the person to the right go. When you come to a yield sign, that's not slow, that's not a rolling stop; it's really stop and look, then go. That's what we're doing here. These are just the normal, basic exercises you do when you get on the road of AI. So what I want to do is normalize it: this is what you do when using these tools. Now that we're introducing you, we're also introducing you to the framework of how to be safe. It's no different than when you first started getting online: don't click on links you don't recognize, don't go to websites that you don't know. There are methods for how you carry yourself even on Facebook, things that folks taught you about how to use Facebook effectively without being scammed, all those kinds of things. So this is not for you to be fearful or afraid. Feel like: oh, I'm being equipped. That's the main point I want to bring out here.

That's great, that's great. So think about it like this: data leaks don't happen because AI is evil. They happen because people forget that AI is not private by default. So think before you type, protect your data, secure your settings, because GenAI is powerful, but you are responsible for what you share. Don't forget that; that's a very important point.

And then, lastly, what we wanted to cover was a tool that we use almost daily, which is ChatGPT. Again, to Terry's point, there's ChatGPT, there's Copilot, there's Google Gemini, there's Claude, and there are a few others out there. You have to find the tool that's best for you. ChatGPT was the first to the game. They were the ones who brought GenAI to market first and beat everyone else there, so they have a more mature product. They have three different versions. There's the free version, which you can just go to chatgpt.com and sign up for; it gives you basic features, text generation, but no web browsing, no file tools, no memory. If you sign up for the Plus version, which at the time of this recording was about $20 a month, you get web browsing, file uploads, image analysis, memory, personalization, and access to ChatGPT plugins. It's best for daily professional use, content creation, and learning.
That's the one I use personally. I actually create images with it, and it's pretty good once you get good with your prompt engineering. You can tell it: hey, I want this person on the left side. I want the words on the right side to be in bold, centered, around this size font, in this color or a gradient. And I want this person's image to have this type of expression. When you get very granular with your prompts, you create images that are so good it would take me ten times as long to create them on my own. And when I say that, I'm talking about the nuance, the shadow on a certain side of the face, that I wouldn't be able to naturally generate myself. So it can definitely supercharge some things there. And I think, Terry, you were just mentioning right before the show that Gemini now has the ability to take text and create a video. Is that right?

Right, yes, yes, yes. So I was playing around; I'm like, hey, what are all the tools I get with my Google Workspace subscription? And I'm looking through, I'm like, most of these I use. Then I'm like, Vids, what is that? So I started playing around with it last week and creating some content. And yeah, I said, hey, create me a one-minute video on HIPAA and privacy. And I was astonished with what it delivered. Like, wow, this is the future. Y'all have to get on board with this. You can't just say, oh, that's for the kids or whatnot, because you'll look up in five years and this is part of your job and you have no idea how to use it.

You've got to embrace this stuff. You've got to embrace this stuff.

So, great point there too, Ralph. Let me share a quick story.

So Noah and I, as part of my... you know, I want to be a great dad. So I said, man, hey, Monday mornings are yours. And so we go out and we play nine, or he plays nine; I just kind of walk and watch. Anyway, we meet some fantastic people every time we go out and play. So I met a couple of new grads this past week, one a gentleman who graduated with a degree in accounting. And he was like, you know, man, I was kind of concerned, because I just graduated with an accounting degree, and AI is out there, and am I going to be obsolete? And I shared the same thing: your expertise will be needed. It'll be a while anyway, because at a lot of these companies the processes are manual, so your ability to beef up and be prepared matters. And he was like, I'm glad you said that, because he's got some friends who own companies. And I love what he shared. He says: yes, I've been going in and volunteering, kind of interning with them, looking at their processes and automating workflows. He's like, I'm teaching myself as I go. I said, son, you're doing it the right way. I could have given the guy a hug, to be honest with you. Because that is it, right? His inquisitiveness didn't stop at "let me see what I can do." No, he put himself in a real-world scenario where he could actually apply his skill set. Again, not everyone might have this opportunity, but he took advantage of it to actually go in, look at workflows, leverage his skill set and what he knows about numbers, accounting, and finance, and start to automate. I said, here's the benefit for you, because he doesn't start his job until October: you get to go in there with a portfolio of AI prompts and say, here's my AI portfolio. And it's a matter of just taking the step, getting in there, jumping in, because those who take the initiative... I said, if for nothing else, when the conversation comes up, you will be able to articulate in an educated manner and have a conversation about the creativity, how you've leveraged it, and how you've seen other businesses leverage it. And that's going to open up opportunity for you. So, I mean, it was a great example of it, right? Fresh out of college.
You know, I'm glad you brought that up, Terry. People don't understand: for those who want to go down the technical track, this is for you. Two things. Your ability to upskill yourself in AI is going to set you apart. Meta, formerly known as Facebook, has been hiring a superintelligence AI team. It may not have come across most of your radars, but Mark Zuckerberg, the owner, wants to create the A-Team of the most elite AI producers in the world. He contracted to pay somebody, I believe it was either 100 or 300 million dollars over ten years. What about that? The industry already understands that this is where it's going.

OK, to Terry's point and his example with that young man: every company in the world has a ton of inefficient processes. There's a sea of them, because we've all been doing things manually for so long. If you can tap into AI and automate, you can produce efficiency, and efficiency is a metric that's used to determine how profitable a company is. If you can automate processes that are inefficient and make them efficient, save time, and be able to do these types of things, that is value. That is value that people are willing to pay you for today, right now. And AI is so new that you can go get an AI certificate right now and it'll put you light years ahead of anybody else trying to do this on their own. The opportunity is there.
I came into IT in the late 90s, when Windows NT was just going into businesses. They were still using green screens at that time, and when they were switching over from green screens to Windows, that's when I got into technology. And I was able to ride that wave in my career, because I stayed on top of the technology. The businesses were moving that fast. The education system couldn't catch up, because they hadn't learned how to educate people on this stuff, and I was getting technical certifications in it. The businesses were like, hey, I need that right now; we'll pay you this, we'll pay you that. There is another wave happening right now, and it's with AI. So I love to hear that. I love to hear that example.

Yeah, and Ralph, to your point: the beauty of what you did is, as Windows transitioned over the years from PCs, it went to virtualization, and the foundation of it was still a Windows OS. Then it went into Citrix, and with Citrix, virtualization again. That foundation you learned back then carried you twenty years, bro.

It did, it did. And I'd like to say I was smart enough to see at the time, oh man, there's this certain gap in time and I'm going to jump into it. That's not how it happened for me. I was just chasing my passion; it just happened to be at that time. Now, twenty-five years later, I look back and I see, man, I got in on a wave, a wave that opened up door after door after door. And I can see that pattern repeating again with AI. So for those young folks out there who are interested in getting into something cutting edge: it can make you a lot of money. And I'm talking about a lot of money. I just told you the top people are getting paid up to 300 million dollars over ten years. I've never heard of anybody making that kind of money in a job. Maybe a business owner, but not a job. If you want significant upside on compensation, AI is your way to go. You just have to find your entry point.

So let's talk about the takeaways for today. GenAI does not think; you do. So you've got to use it wisely. Learn prompt engineering: prompt engineering is needed to unlock the real value in GenAI, and you really need it. Protect your company and your career from data leaks: be smart with your settings, your tools, and what you input into them. And then choose a tool that meets your requirements. Take a look at all of them and try them out. Again, don't put any personal or sensitive information into them, but try them out. Start learning prompt engineering; you can do that for free. The courses are out there, and we'll put some links in the show notes for you to check out. You can go to YouTube, look up prompt engineering, and start learning right now. There's nothing standing between you; there's no barrier to entry, I guess is what I'm trying to say. But find the tool that works with and matches your workflow and your privacy needs. Any closing thoughts on that?

Yeah, GenAI, man, that is a great place to start. There are some other models coming out, but what we wanted to focus on today was GenAI. And as time moves forward, we'll introduce the others; there's agentic AI, for example, which does have an ability to be creative. But we just wanted to give you an intro and a great place to start, because the majority of the opportunity right now, at what I would call the entry-level point, is this generative AI. So it's a great place to start.

Absolutely. Well, thank you as always. We appreciate you supporting the show and listening. And as always, we want to hear from you: go to our website, leadershipsovereignty.com/survey, and take our listener survey to help us improve the show for you. To support the show, you can always give a donation at leadershipsovereignty.com/donate. And until the next time, stay safe. We'll talk to you; we'll see you on the next show.

Thank you for listening to the Leadership Sovereignty Podcast. If this content blessed or helped you in any kind of way, support us today by subscribing to our YouTube channel, clicking the like button for this episode, and sharing this content with others that you think it will help. Until next time, stay safe. Peace and blessings.