The State of Enterprise IT Security

In this episode of the State of Enterprise IT Security podcast, Brad Bussie digs into the rapidly evolving landscape of artificial intelligence (AI) in cybersecurity. The episode covers three critical topics: Google's initiatives to incorporate AI into cybersecurity, the collective efforts of tech giants to mitigate AI-generated election interference, and the challenges posed by shadow AI in corporate environments.

Episode Highlights:

[00:00 - 02:49] The Role and Challenges of AI in Cybersecurity: Discussion on Google's proactive approach to fostering AI in cybersecurity, aiming to leverage AI for improved threat detection and response. Despite these advancements, there's an acknowledgment of the overwhelming challenge defenders face against AI-generated attacks, stressing the need for a collaborative effort to secure AI technologies from the ground up.
[05:03 - 07:01] The Response to AI-Generated Election Trickery and the Importance of Collaboration: Tech companies, including major names like Adobe, Amazon, and Google, sign an accord to combat AI-generated election trickery, focusing on deliberate efforts to deceive voters, along with a critique of the companies' cautious approach to AI governance in the context of protecting democratic elections.
[09:24 - 16:10] Shadow AI: Risks and Strategies for Mitigation: A discussion of the prevalence of shadow AI within organizations, its implications for data security, and strategies for mitigating it.

Key Takeaways:

AI can greatly enhance threat detection and response in cybersecurity.
Tech companies are taking steps to combat AI-generated election trickery, but more needs to be done.
Shadow AI poses risks to organizations and requires comprehensive technical controls and user education to mitigate.
Public and private organizations should work together to secure AI from the ground up.

Follow Us:

LinkedIn: e360 Solutions
Facebook: e360 Solutions
X (Twitter): @e360 Solutions
YouTube: e360 Solutions

About the Show:
The State of Enterprise IT Security podcast, hosted by e360, is your go-to source for making IT security approachable and actionable for technology leaders. Join us for insightful discussions and expert analysis on the latest cybersecurity topics and trends.

Visit us: www.e360.com

Catch the full episode for a deeper dive into these topics and to arm yourself with the knowledge you need to steer your organization towards a more secure future.

What is The State of Enterprise IT Security?

The State of Enterprise IT Security is the show for technology leaders looking for actionable and approachable guidance in the security landscape. The show features Brad Bussie, the CISO of e360, a premier technology solutions provider. In each episode, Brad covers three timely topics impacting the enterprise security landscape.

00:00 - AI allows security professionals and defenders to do something that they haven't been able to do before, which is to work at scale in threat detection, malware analysis, vulnerability detection, and fixing vulnerabilities, as well as incident response. But what I'm seeing is, by and large, defenders who are responsible for protecting corporate and personal data just aren't prepared. They're pretty overwhelmed and feel generally unsupported when it comes to the new style of attacks and just the volume that's coming from AI-generated types of attacks.

01:02 - Hello. Hey everyone, I'm Brad Bussie, chief information security officer here at e360. Thank you for joining me for the State of Enterprise IT Security podcast. This is the show that makes IT security approachable and actionable for technology leaders. I'm happy to bring you three topics this week. First, new Google initiatives to foster AI in cybersecurity. Second, tech companies sign an accord to combat AI-generated election trickery. And number three, shadow AI. So with that, let's get started.

01:41 - Now, Google has an initiative to foster AI in cybersecurity, and as a cybersecurity practitioner, I'm pretty excited about this. They've announced the initiative, and it's aimed at fostering the use of artificial intelligence in cybersecurity overall. So what does that mean? If I look at this, I think AI allows security professionals and defenders to do something that they haven't been able to do before, which is to work at scale in threat detection, malware analysis, vulnerability detection, and fixing vulnerabilities, as well as incident response.

02:28 - But what I'm seeing is, by and large, defenders who are responsible for protecting corporate and personal data just aren't prepared. They're pretty overwhelmed and feel generally unsupported when it comes to the new style of attacks and just the volume that's coming from AI-generated types of attacks. So to combat this, I feel, and others feel the same, that public and private organizations should work together, and the goal should be to secure AI from the ground up.

03:12 - And what's exciting is Google is doing just that. They're continuing to invest in what they're calling an AI-ready network of global data centers, and they've got a bunch of those. They're also backing 17 startups across the UK, the US, and the EU, and if you look at all of the different acquisitions and things they're working on, the focus is really around new AI for cybersecurity programs.

03:48 - Now, the company also has something pretty cool called Magika, and it's open source. It's an AI-powered tool for malware detection through things like file type identification, and it powers things like Google Drive, Gmail, and Safe Browsing, along with some other components that have been blended with an acquisition known as VirusTotal.
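For a sense of what that looks like in practice, here's a minimal sketch using Magika's open-source Python package, following the usage shown in the project's README. The file name is a placeholder, and the exact result attribute names can vary between library versions, so treat this as illustrative rather than definitive.

```python
# Minimal sketch of Google's open-source Magika for AI-powered file type
# identification (pip install magika). Follows the project's README usage;
# result attribute names (e.g. ct_label vs. label) vary by version.
from pathlib import Path

from magika import Magika

magika = Magika()

# Identify a raw byte buffer...
res = magika.identify_bytes(b"# Example\nThis is an example of markdown!")
print(res.output.ct_label)  # e.g. "markdown"

# ...or a file on disk ("sample.bin" is a placeholder path).
res = magika.identify_path(Path("sample.bin"))
print(res.output.ct_label)
```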

04:15 - Another pretty interesting thing is they want to support advancements in AI-powered security overall. So Google is offering $2 million in research grants and strategic partnerships to support research, and they're doing it at institutions like the University of Chicago, Carnegie Mellon, and Stanford.

05:03 - So, second topic today: tech companies have signed an accord to combat AI-generated election, we're going to call it, trickery. Executives from companies including Adobe, Amazon, Google, IBM, Meta, Microsoft, and OpenAI, and, you heard me throw a little shade on TikTok in a previous episode, but TikTok was there too, gathered at the Munich Security Conference to announce a framework for how they plan on responding to AI-generated deepfakes.

05:51 - And what's interesting is these major tech companies signed a pact to voluntarily adopt, and I'm going to do this in air quotes, you're going to see and hear this, "reasonable precautions" to prevent artificial intelligence tools from being used to disrupt democratic elections, not just in the US, but around the world.

06:16 - And if you ask me, I think they're taking a bit of a kid-gloves approach to this. The reason is that the companies haven't committed to banning or even removing deepfakes. Instead, the accord outlines methods they'll use to detect and then label deceptive AI content when it's created or distributed on their own platforms.

07:01 - So I just said a lot of things, and I feel like I didn't really say much of anything, because based on that response from these companies, to me, they're not doing nearly enough. Detecting and labeling is one thing, but when I watch most people on social media, attention spans are super short; generally they're not reading labels, the content is just flashing by.

07:36 - So, to put things in perspective, election deepfakes are already happening. A robocall went out in New Hampshire using an AI-generated version of President Joe Biden's voice, trying to discourage people from voting in the primary election last month. And by all accounts, it was pretty convincing.

08:12 - Some think we should hold back on some of our AI capabilities, like hyper-realistic text-to-video generators and full voice-to-face generation. It's a little hard, though, because what we're trying to do is maintain free speech while maximizing transparency.

09:24 - The third thing we're going to talk about today is shadow AI, and I love topics like this. Even back when it was shadow IT, some of my friends called it business-led SaaS. It's a real problem for organizations: employees leveraging what we call unsanctioned applications.

11:19 - So, a couple of different strategies that I would recommend. The first is what I call comprehensive technical controls. This is establishing things like network traffic monitoring, having a secure web gateway, and having endpoint detection and response on endpoints. All good security things, but the goal here is to pinpoint unexpected AI-related activity and identify AI software. I'm not even going to call them installations, because a lot of these are still web-based. But it is possible for an organization to have a bootleg or pirated LLM that's imported into a data center, and the next thing you know, you've stood up your own, and that could be dangerous.
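To make that concrete, here's a minimal sketch of the kind of network-layer check that can surface shadow AI. The domain watchlist, log format, and file name are illustrative assumptions, not any specific product's output; in practice this data would come from your secure web gateway, proxy, or EDR telemetry.

```python
# Sketch: scan web proxy logs for requests to known gen-AI services.
# The domain list and CSV layout are assumptions for illustration.
import csv
from collections import Counter

# Hypothetical watchlist of gen-AI endpoints; extend with your own.
GENAI_DOMAINS = {
    "chat.openai.com",
    "api.openai.com",
    "gemini.google.com",
    "claude.ai",
}

def find_shadow_ai(log_path: str) -> Counter:
    """Count proxy-log hits per (user, domain) for watchlisted domains.

    Assumes a CSV log with 'user' and 'host' columns.
    """
    hits: Counter = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["host"].lower()
            if any(host == d or host.endswith("." + d) for d in GENAI_DOMAINS):
                hits[(row["user"], host)] += 1
    return hits

if __name__ == "__main__":
    # "proxy_log.csv" is a placeholder export from your gateway.
    for (user, host), count in find_shadow_ai("proxy_log.csv").most_common(20):
        print(f"{user} -> {host}: {count} requests")
```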

12:19 - The second strategy is a staff survey. So, conducting a survey and just understanding: what are your users doing? What have they decided is helping them do their job? It's interesting how much information you can get from a simple one- or two-question survey. I guess we're past the point of the first question being "do you know what gen AI is?"; pretty much everybody knows what it is now. So question one: are you using gen AI or an LLM? Question two: which one, or ones, are you using? The third strategy is onboarding due diligence.

12:57 - So when we're onboarding a third-party vendor or partner, understanding: are they leveraging AI, and how are they doing it? Having some of those questions in the due diligence process will help identify potential shadow AI risks, because maybe your organization isn't doing it today, but you onboard a third party that's a heavy user, and the next thing you know, it's like a gateway drug: your users are leveraging what they're doing.

13:28 - The fourth strategy, as with anything, is enforcing consequences. We've talked about having a governance program, but there's really no way to enforce governance unless there's some kind of consequence. So making sure you're implementing consequences for the use of unauthorized AI sends a strong message across the organization, and I think it will help deter some of that unauthorized usage.

13:59 - And the fifth one I probably should have done first, but I did it last because technical solutions aren't always the answer. I think educating users is still very important. It's been debated for a long time whether we should protect users from themselves through technology, but with this one specifically, we left technology last on purpose.

14:23 - So, using things like SaaS security platforms that can automatically detect when business credentials are used to log into any of these tools. Most of the publicly available gen AI large language models leverage a common set of authentication and authorization providers, whether it's Microsoft or Google; very rarely can you just create an account with any email address. They're starting to standardize, which is actually good, because it helps organizations make sure this is something that's allowed.
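Here's a minimal sketch of that detection idea: flag sign-in events where corporate credentials were used to access a gen-AI application. The event format, domain, and app names are illustrative assumptions; real SaaS security platforms expose this through their own APIs and exports.

```python
# Sketch: flag sign-ins where corporate credentials hit a gen-AI app.
# The JSON layout, domain, and app names are assumptions for illustration.
import json

CORP_DOMAIN = "example.com"                    # hypothetical corporate domain
GENAI_APPS = {"ChatGPT", "Gemini", "Claude"}   # hypothetical app display names

def flag_genai_signins(events_path: str) -> list[dict]:
    """Return sign-in events where a corporate account accessed a gen-AI app.

    Assumes a JSON array of events with 'user' and 'app' fields, e.g. an
    export from your identity provider's sign-in log.
    """
    with open(events_path) as f:
        events = json.load(f)
    return [
        e for e in events
        if e["user"].lower().endswith("@" + CORP_DOMAIN) and e["app"] in GENAI_APPS
    ]

# "signin_events.json" is a placeholder export file.
for event in flag_genai_signins("signin_events.json"):
    print(f"{event['user']} signed into {event['app']}")
```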

15:05 - There are also tools out there, like browser extensions, that watch what's being typed into prompts, because users can often share information that is private and not meant for the public domain. Maybe they don't know that, so they're putting that data into a prompt or ingesting it somewhere into an AI. We can get into some of the technology in a later episode. It's definitely interesting, because it's really a data loss prevention exercise, and it's something we can dig into a little bit more.
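As a rough illustration, here's the kind of pattern-matching logic such an extension (or an inline proxy) might apply before a prompt leaves the browser. The patterns are simple examples, not production-grade DLP rules, and the sample prompt is made up.

```python
# Sketch: scan prompt text for data that shouldn't go into a public AI tool.
# The regexes are simple illustrations, not production-grade DLP rules.
import re

SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

# Example: this made-up prompt trips the email and SSN patterns.
findings = scan_prompt("Summarize this: john.doe@example.com, SSN 123-45-6789")
if findings:
    print(f"Blocked prompt; matched patterns: {', '.join(findings)}")
```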

15:54 - So I would suggest adopting these five simple ways to identify and protect an organization when it comes to shadow AI. Thank you, everybody, for tuning in, and we'll see you next time.