The State of Enterprise IT Security

In episode nine of the State of Enterprise IT Security podcast, host Brad Bussie, Chief Information Security Officer at e360, covers how hackers from nations like China and Russia are using OpenAI systems in cyberattacks, the launch of Pindrop's real-time audio deepfake detection tool, Pindrop Pulse, and what the Biden administration's new executive order on AI could mean for cybersecurity.

Episode Highlights:

[00:03.116] Brad opens with research from OpenAI and Microsoft showing that hackers working for nations like China and Russia have used OpenAI systems in their attacks, emphasizing the practical rather than exotic application of AI in these activities.

[02:29.518] The episode discusses the introduction of Pindrop's real-time audio deepfake detection tool, Pindrop Pulse, and its ability to identify synthetic audio, a significant advancement in combating phone-based fraud and deepfake technologies.

[07:17.87] Bussie unpacks the Biden administration's new executive order on AI, focusing on its implications for cybersecurity. The discussion highlights the order's directives on AI development, critical infrastructure protection, and guidelines for AI usage across various sectors, along with challenges in enforceability and the need for updates to the legal framework.


Key Takeaways:
1. Hackers' Use of AI: Hackers from various countries are exploiting OpenAI systems for cyberattacks, using them for tasks like email drafting and document translation to increase their productivity.
2. Pindrop Pulse: A new tool that can detect audio deepfakes in real time, helping to combat phone-based fraud and potentially aiding political campaigns and call centers in ensuring communication security.
3. Biden Administration's Executive Order on AI: Outlines initiatives for AI development, critical infrastructure protection, and the establishment of guidelines for AI usage, with enforceability dependent on agency action and legal framework updates.


Follow Us:

LinkedIn: e360 Solutions
Facebook: e360 Solutions
X (Twitter): @e360 Solutions
YouTube: e360 Solutions

About the Show:
The State of Enterprise IT Security podcast, hosted by e360, is your go-to source for making IT security approachable and actionable for technology leaders. Join us for insightful discussions and expert analysis on the latest cybersecurity topics and trends.

Visit us: www.e360.com

Catch the full episode for a deeper dive into these topics and to arm yourself with the knowledge you need to steer your organization towards a more secure future.

What is The State of Enterprise IT Security?

The State of Enterprise IT Security is the show for technology leaders looking for actionable and approachable guidance in the security landscape. The show features Brad Bussie, the CISO of e360, a premier technology solutions provider. In each episode, Brad covers three timely topics impacting the enterprise security landscape.

The research right now is showing that hackers that are either working for nation-states, or nation-states themselves, have used OpenAI systems in the creation of attacks. And this research comes from OpenAI and Microsoft. But instead of using AI to generate the exotic attacks that we were all worried about, the hackers have used it in pretty mundane ways.

All right. Hey, everybody, I'm Brad Bussie, Chief Information Security Officer here at e360. Thank you for joining me for the State of Enterprise IT Security Edition. This is the show that makes IT security approachable and actionable for technology leaders. I'm happy to bring you three topics this week.

First, hackers from China, Russia, and other countries are using OpenAI systems. Second, following up on our deepfake topic from last episode, Pindrop launches a real-time audio deepfake detection tool, which they call Pindrop Pulse. And third, what will the Biden administration's new executive order on AI mean for cybersecurity?

So with that, let's get started.

So, first topic today: hackers from China, Russia, and other countries are using OpenAI systems, and honestly, they're using other gen AI systems as well. But the research right now is showing that hackers that are either working for nation-states, or nation-states themselves, have used OpenAI systems in the creation of attacks. And this research comes from OpenAI and Microsoft. But instead of using AI to generate the exotic attacks that we were all worried about, the hackers have used it in pretty mundane ways. They've leveraged OpenAI and others to draft emails, to translate documents, to debug computer code.

So I find this a little bit interesting, because everybody's worried about AI taking over the planet. But I've said this before: AI to me is augmented intelligence. And I think what the attackers are showing is that that truly is the case. They're using it just like everybody else; they're trying to be more productive in what they're doing. Granted, we don't want them to be more productive, but in essence, they're leveraging the gen AIs to augment things that they're already doing.

Now, the companies didn't say which of the OpenAI technologies and tools were being used. And what I like to see is that when they noticed this, they being OpenAI, they revoked access and shut down the ability of these hacker groups and nation-states to use the platform.

Now, it's interesting, because you think about this and you go, well, wait a second. OpenAI has limited where customers can sign up for accounts, but it's really not that hard to fake where you are. And some of the sophisticated culprits just evade that kind of detection. They mask their location, they come through a VPN. There are ways they're going to do it, but honestly, they're signing up just like everybody else, putting in a credit card, doing the things that all of us do to sign up for the service.

I find it interesting that Microsoft is tracking something like 300 hacking groups, and those include cybercriminals and nation-states. And what they're leveraging is OpenAI to help track and disrupt the use of gen AI by these hacking groups. So I think it's interesting that the attackers are using it and we as defenders are using it.

And I think the thing that is making it a little bit harder for some of the defenders is the open-source AI technologies. That makes the task a little bit harder, because the attacks are coming from so many different places. They're not as predictable, and it's something that we're going to have to watch.

So one person did ask me, well, wait a second: if the attackers are using this technology, was it used in the attack against Microsoft? If you remember, there was a Russian group that attacked the Microsoft executives. But when they did their recon and their research, they found that the attackers didn't actually use OpenAI or the Microsoft solution in that attack.

So I found that interesting.

Second topic today: following up on last episode, we talked a little bit about deepfakes, and there's a company, Pindrop. They launched a real-time audio deepfake detection tool, which they're calling Pindrop Pulse. So they unveiled this solution, and what they actually did that I found pretty interesting is they proved that voice-cloning tech was used to simulate President Biden's voice and mislead New Hampshire residents. They were able to pinpoint that it was simulated, they were able to pinpoint the technology that was used, and they were able, in real time, to flag it as a simulation.

So what I find interesting is they can examine an audio sample for signs of synthetic content. And it's kind of a crazy number when you think about it: it's like 150 milliseconds to do this.

So what they do is a comprehensive detection. The tool categorizes synthetic audio, and they look for, I think it's like four different fraud categories. And it covers a wide range of techniques, from something like a simple replay, where someone said something, you capture the voice, and then you replay it, all the way down to true AI-generated speech built from samples.
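Pindrop hasn't published how Pulse works under the hood, but to make the real-time idea concrete, here is a minimal sketch of sliding-window scoring over a telephony audio stream. Everything in it, the window size, the threshold, and the placeholder score_window classifier, is a hypothetical stand-in for illustration, not Pindrop's actual design.

```python
# Generic sketch of real-time synthetic-audio flagging; NOT Pindrop's
# implementation (Pulse's internals are not public). The idea: slide a
# short window over incoming call audio and score each window with a
# "live vs. synthetic" classifier, alerting as soon as a score crosses
# a threshold instead of waiting for the full recording.
import numpy as np

SAMPLE_RATE = 8000                        # typical telephony sample rate
WINDOW_MS = 150                           # latency figure cited in the episode
WINDOW = SAMPLE_RATE * WINDOW_MS // 1000  # samples per scoring window


def score_window(samples: np.ndarray) -> float:
    """Placeholder for a trained deepfake classifier.

    A real detector would extract spectral features and run a model
    trained on live versus synthetic speech; this stub always says live.
    """
    return 0.0  # hypothetical score: 0.0 = live, 1.0 = synthetic


def monitor(stream, threshold: float = 0.8):
    """Yield an alert the moment any window looks synthetic."""
    buffer = np.empty(0, dtype=np.float32)
    for chunk in stream:  # chunk = np.ndarray of PCM samples
        buffer = np.concatenate([buffer, chunk])
        while len(buffer) >= WINDOW:
            window, buffer = buffer[:WINDOW], buffer[WINDOW:]
            if score_window(window) >= threshold:
                yield "synthetic audio suspected"
```

The windowed design is what makes a number like 150 milliseconds plausible: each decision only needs a fraction of a second of audio, so a call can be flagged mid-conversation.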

I would say it's a pretty versatile application. And what it's doing right now is aiming to combat phone-based fraud, and that is something we're going to see a lot more of during election season. This is something that's going to be helpful for call centers, it's going to be, I hope, leveraged by political campaigns, and it addresses a rising attack surface: fake audio crimes. I think we're going to see more and more of that.

So this kind of technology is pretty exciting, because you heard me last time throw a little bit of shade on some of the social media platforms and social networks for not policing their own platforms. When I see technology like this, it makes me excited, because that means there is going to be hope. If the socials aren't going to do it themselves, then there will be third parties that can.

So I'm excited, because I think there are going to be some further integrations between security products and Pindrop and others. And I know that there's already been some early adoption; some clients have been disclosed, and they've already pulled Pulse into their operations. The big one is securing call center communications, to make sure that someone calling in is really that person. Some of you have done voice authentication with your bank; if an attacker can fake that, that's a concern. So call centers are starting to pull this in to make that kind of thing a little harder for attackers to do.

So, the third topic today. You hear me talk about AI all the time; that's the time we live in. Third topic: what will the Biden administration's new executive order on AI mean for cybersecurity?

You know, we all hear about this stuff. Some of us read it, some of us listen to it. I'm a big fan of: tell someone, then tell them what you told them, and then tell them again. That's when you start to retain what you are hearing. So for those of you that don't know what the directive is or what it outlines, I'm just going to tell you some of the key components.

First is the development of AI tools and test beds.

The Department of Energy, for example, is tasked with developing an AI model evaluation tool, and this is to include safeguards. We're trying to mitigate risk and facilitate the creation of some foundational models, and that's really focused on scientific research.

Second, we're looking at critical infrastructure protection. The national security affairs team and the OMB are coordinating their efforts to develop guidelines that the federal agencies will mandate for critical infrastructure protection. You heard me talk about this in a previous episode: securing our critical infrastructure is a big deal. And nation-states are definitely after our power, energy, water, and communications.

Third thing: security reviews and public data access. I would say this is focused right now around the chief data officer; there's a council, and they've been tasked with creating guidelines for security reviews and public access to federal data.

There is now a mandate around AI in the patent process, and ultimately what that's going to look like: if something is invented leveraging an AI, how are you going to patent that? So they're trying to put some things around it.

Then, training and risk mitigation. There's a bunch of different departments, including Homeland Security, Labor, and Health and Human Services. All of them have been directed to develop training programs, best practices, and strategies, and to mitigate anything that is risk-related around AI.

Law enforcement has some mandates. The attorney general is looking at best practices for law enforcement and their ability to leverage AI.

There's a need for AI talent in government. So how are we going to bring up the next generation? And it's not like we're going to bring up the next generation of generative AI, so to speak. It's the people that can create prompts; it's those that understand the machine learning side of all of this, that can create more sophisticated models. Because we're not to the point where AI is making itself yet. We're still in the augmented time. It's going to be a little bit interesting once we do get to the time where AI is doing its own thing. And I'm sure some of you are thinking Terminator and Skynet. I think about that stuff all the time, so we'll just kind of see how that goes. I don't know what to do about that right now.

Then there are a lot of reports and guidelines. The thing is like 43 pages from the Biden administration. So, to be honest, I started reading it, made it partway through, and then, I'll admit, I got kind of distracted. So I actually took the order, uploaded it to ChatGPT, and asked, would you please summarize this for me? And it was actually pretty good. It gave me a lot of good high-level things, and it helped to reinforce a lot of this.
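For anyone who wants to do the same thing programmatically rather than through the ChatGPT interface, here is a minimal sketch using the OpenAI Python SDK. The file name, model choice, and prompt wording are illustrative assumptions, not what was actually used on the show.

```python
# Minimal sketch: summarize a local text copy of the executive order with
# the OpenAI Python SDK. Model name and prompt are assumptions.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Hypothetical local plain-text copy of the ~43-page order.
with open("executive_order_on_ai.txt", encoding="utf-8") as f:
    order_text = f.read()

response = client.chat.completions.create(
    model="gpt-4o",  # assumed; any large-context chat model works
    messages=[
        {
            "role": "user",
            "content": "Please summarize the key cybersecurity directives "
                       "in this executive order:\n\n" + order_text,
        }
    ],
)
print(response.choices[0].message.content)
```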

But what I was left with at the end of all of this was: is this order enforceable yet? And what I came up with is that this is very agency-implementation heavy. It really does hinge on the actions of various federal agencies. And as I think all of us know, some of them move fast, some of them move slow, and some of them move at the speed of stop. So it's really up to the agencies to take the necessary steps to implement the directives. Until they do that, the order's provisions really remain in the planning and development stages, and they could stay there for a while. So we'll just have to see how that goes.

There are really a lot of existing legal frameworks that I don't think were ready for AI. So there need to be some updates and changes to some of those frameworks, and I think that's the only way to make things enforceable. It really is going to depend on the current legal powers of each agency.

I would say that securing AI and governing AI is an ongoing development of standards and policies. NIST is still in the very early stages of their AI policy and their AI governance. It's good foundational knowledge, but it's just going to take some time to mature.

You heard me talk in a previous episode about some congressional action: who and how is the House using AI, how's the Senate using AI? And really, I think it's going to take some maturity in both areas to be able to enforce the aspects that we've talked about and roll them into legislation there.

In all things, I've found in government a good balance between the public and private sector and the compliance therein. If you look at something like NIST 800-171 and some of the compliance frameworks that have been created so that a private entity can talk to a public entity and exchange information safely, there are safeguards put in place to facilitate that. The same thing needs to happen from an AI governance and security perspective, and all of that stuff is still in the works.

So, enforceable? Are we ready? Not quite yet. There is a time frame for implementation. It hasn't really been set in stone, but I think 2025 is a good target for some of these things to actually become enforceable. Honestly, I think it's going to take a lot longer than that.

So thank you for tuning in, and I will talk to you next episode.