Cars, Hackers & Cyber Security

As artificial intelligence becomes an integral part of the automotive ecosystem, its dual nature-both as a powerful security tool and a potential attack vector-demands urgent attention. In this episode of Cars, Hackers, and Cyber Security, we explore how AI is reshaping the threat landscape for connected and autonomous vehicles.

Drawing from insights from our blog How AI Is Reshaping Automotive Cybersecurity, we examine real-world use cases where AI strengthens detection, response, and compliance-and where it introduces new vulnerabilities. From automated threat modeling and deepfake firmware attacks to ethical dilemmas and evolving regulations, we unpack the complexities facing automakers and suppliers worldwide.
Discover how PlaxidityX is enabling manufacturers to leverage AI responsibly, aligning advanced analytics with UN R155 compliance, and reducing risk in increasingly complex supply chains.

Chapters:
00:00 Welcome & Episode Overview
01:10 How AI Is Transforming Cybersecurity in Automotive
04:00 AI as a Security Enabler: Detection, Prediction & Automation
07:15 AI as a Risk: Adversarial Attacks, Data Poisoning & Manipulation
10:00 Regulatory and Compliance Implications (UN R155, ISO 21434)
12:30 PlaxidityX Solutions: Responsible AI for Secure Development
14:30 Final Takeaways & Call to Action 

What is Cars, Hackers & Cyber Security?

As cars become smarter and more connected, the demand for top-tier automotive cyber security has never been higher. With expert insights from PlaxidityX, a leading automotive cyber security company, we’ll guide you through the challenges and solutions protecting millions of vehicles worldwide. Whether you’re an industry expert or just curious about how cars are secured in the digital age, this podcast comprehensively looks at how cyber defenses are developed, tested, and deployed.

We don’t just talk about the technology; we talk about what it means for you—the driver, the manufacturer, the tech enthusiast. We explore how automotive cyber security solutions are applied in real-world scenarios to safeguard everything from onboard infotainment systems to critical vehicle control units.

Tune in to gain a deeper understanding of how manufacturers are staying one step ahead of hackers and ensuring a more secure, connected world.

the pace of change with artificial intelligence, uh, especially generative AI, it's just incredible, isn't it?
It really is. It feels like it's transforming almost every industry.
Yeah. From software development, even art, things we wouldn't have thought possible just a few years back.
It's a fundamental shift. Definitely. Um, everyone's looking at how AI can automate things, speed them up,
make processes faster, basically.
Exactly. Accelerate them, let people focus on, you know, the finer details, the refinement. And today you're joining us for a deep dive specifically into how this AI revolution is hitting the automotive industry.
Right. And we're narrowing the focus even more.
We are we're zeroing in on a really critical area, cyber security,
which is becoming more and more vital as cars get more connected.
Absolutely. So our mission here is to really unpack how AI is changing vehicles.
It presents these huge opportunities, right, for efficiency, better features,
but and this is the crucial part, it also introduces is brand new risks, significant ones.
Yeah, it's definitely a double-edged sword.
So, by the end of this, you should have a much clearer picture of this complex uh fast-moving landscape.
And to help us navigate this, we're drawing on insights from Aaron Ley.
He's the co-founder and CTO of Plexity X.
That's right. They used to be known as Argo Cyber Security, founded back in 2014, so about 11 years ago.
Okay. So, they have some serious experience.
Oh, yeah. 16 years in cyber security overall and a solid decade focused specifically on automotive. They're deep in this space
and their solutions are actually out there in cars on the road.
Millions of them already. And they've got contracts for over 70 million more vehicles.
Wow. So they see the whole picture.
Definitely. They do complete life cycle solutions. Things like automating the Terra process.
That's threat assessment and remediation analysis.
Exactly. All the way through to vulnerability management and uh XDR for tracking security events. They're really on the front lines.
Okay. So let's maybe broaden out for just a second. AI's general impact,
right? Like we said, almost every sector is looking at automation, acceleration,
letting people focus on refining things. It's a common theme.
It really is. And that naturally applies to making cars, too.
Makes sense. The whole vehicle development process, lots of stages, lots of activities
that can be automated or, you know, significantly improved using AI.
But what's really unique in automotive, you were saying?
Well, it's not just about how the car is made. AI is also enhancing the actual capabilities inside the vehicle.
Ah, right. The driver experience itself.
Exactly. That's a key distinction for this industry.
Like uh Mercedes announcement.
Yeah.
Back in 2023.
Yeah. About replacing their standard voice assistant with an AI one,
promising more natural conversations, more functions,
right? And then you have BYD more recently saying they'll use Deepseek AI for driving assistance,
which immediately makes you wonder, okay, does this mean better performance? Is it more available? Cheaper maybe.
Exactly. Those are the big questions. It has really significant implications,
which brings us squarely to that double-edged sword you mentioned, AI and security.
Yes. Let's start with the positive side, AI as an enabler. So, just like elsewhere, it can speed up and automate security tasks a lot.
So, beyond just making thread assessment faster, does it actually improve the quality? Like, can it find things people might miss?
It potentially can. Yeah. Or at least make the process more thorough. But it's also huge. for detection, monitoring, investigation
both inside the car and uh back at base offboard
both. Absolutely. It can make existing security tools in the car more accurate, provide more, let's say, meaningful information.
Meaningful how? Like filtering out noise.
Precisely. Think about the sheer flood of data coming from vehicles. It's overwhelming,
right? Hard to see the real threats.
Exactly. So AI could help manage that data right in the vehicle. Store only what's important. Prioritize context
and only report stuff. That's genuinely suspicious.
Yes. Which tackles those big challenges of storage space and network bandwidth and helps security teams focus.
Okay, that's the AI as ally side. Powerful stuff. But then there's the other edge of the sword.
The darker side. When AI becomes the weapon
or the target itself,
both attackers can definitely leverage AI. Think about crafting super convincing fishing emails,
much harder to spot,
or finding software vulnerabilities faster, even speeding up writing the code to explo exploit them.
So it gives attackers a boost too.
A significant one. And then there's the critical point AI itself becoming the attack surface,
especially these big generative models like chat GPT or Gemini.
Exactly. And a major attack vector here is something called prompt injection.
Okay. What's that exactly?
It's where attackers basically trick the AI. They feed it inputs, prompts that make it behave in ways it's not supposed to,
like getting it to talk about forbidden topics
or perform actions it shouldn't, manipulate users. maybe even leak private information it has access to.
Can you give us an example? Make it a bit more concrete.
Sure. There was some research um looking at an AI email assistant. Its job was, you know, summarize emails, maybe draft replies.
Okay. Helpful tool,
right? But the researchers sent it an email with hidden malicious instructions embedded within it.
Cleverly disguised within the text, so a human might not notice, but the AI processed them.
And what happened?
The AI was tricked. It started doing things like deleting other emails,
providing false summaries, modifying messages.
Whoa.
Even forwarding sensitive information from the inbox to the attackers.
And the key thing,
the attackers never needed direct access to the inbox themselves.
The AI became their unwitting inside agent.
Precisely. It's a chilling example of how this manipulation can work.
That email assistant scenario is uh quite alarming.
Yeah.
And it leads us right into a realworld case study from Flexity X.
Yes, this came out of their penetration testing work on a vehicle from a Chinese OEM.
So, this wasn't theoretical. This actually happened during testing.
Correct. The research team had gained some initial access to the car systems.
Okay.
But it wasn't enough. They couldn't, you know, trigger safety systems or do the really high impact things they were testing for.
So, they hit a wall initially,
sort of. Then they discovered something interesting about the vehicle's voice assistant.
The thing you talk to.
Yeah. It had an internal interface using MQTT.
MQTT. That's a messaging protocol. like an internal chat system for car components.
Exactly. A lightweight one often used for that. And through this interface, they found they could send commands to the voice assistant.
Okay. So, what could the voice assistant do?
This was the shocker. It had really extensive permissions, high privileges,
meaning
meaning it could send arbitrary CAN messages onto the vehicle's internal network.
Ah, and CAN messages, that's the core language the car systems use to talk to each other.
That's right. Brakes, engine, steering, infotainment, it all runs on CAN.
So if you can send any CAN message you want,
you can achieve well pretty much everything. Control almost any function in the vehicle.
That is a huge vulnerability through the voice assistant.
It was a real eye openener for everyone involved.
A seemingly harmless component with massive power.
I think everyone listening is probably side eyeing their car's voice assistant right now.
Maybe.
So let's connect this back. We have AI assistants becoming more common like the Mercedes example,
right? More capable, more integrated.
What does this case study tell us about the potential risks there? If prompt injection, like in the email example, could be used.
It raises some really serious questions. What if attackers could trick that AI assistant in your car?
An assistant that might have access to your location, cameras, internet connection.
Exactly. Imagine it being tricked into giving you wrong navigation directions constantly.
Annoying, but maybe not critical.
Or maybe it continuously shares your live location. with an attacker.
Okay, that's worse.
Or think about this. You park your car, you walk away, the AI assistant gets a malicious prompt and decides to unlock the doors and start the engine,
facilitating theft. Wow.
These aren't far-fetched scenarios. There are serious possibilities we need to consider now as we implement this tech.
It really underscores the need for careful design. So, how do we build a secure AI future for cars? Then, where do we start?
Well, the foundation has to be standards and regulations, implementing them. ing compliance.
We got things like the EU AI act, right? Adopted about a year ago.
Yes. And more recently, the ISO 8800 standard specifically for AI in road vehicles.
But I mean, AI is moving so incredibly fast. Are these standards keeping pace or is it always a game of catch-up?
That's a really critical point. Regulation often lags innovation.
So, what else can we do?
We have the advantage, maybe the privilege of watching other industries grapple with AI first.
Learn from their mistakes
hopefully. And a huge takeaway across the board is don't give AI excessive permissions
like that voice assistant case study.
Exactly. Limit what it can do. And then you have to think really hard about what it might do even with limited permissions. Anticipate misuse.
Okay. Limit permissions. Think about misuse. What about practical security measures during development?
Robust mechanisms are non-negotiable. Things like strong input filtering and validation are critical.
Making sure the AI isn't easily tricked by bad inputs.
Precisely. And then just relentless thorough testing, more testing than usual probably to make sure the AI is as robust as possible against new kinds of attacks.
And what if despite all that something goes wrong, an AI model starts behaving badly in the field?
That's crucial. You absolutely need the ability to remotely update it or at the very least disable it.
Like an emergency stop button for the AI,
especially in security critical situations. It's an essential safety net when dealing with potentially unpredictable AI behavior.
Okay, makes sense. Build it carefully, test it thoroughly, have a kill switch
in essence. Yes.
Now, let's flip the script again. We've talked a lot about the risks, but how can AI be our ally? How can it help solve automotive cyber security challenges,
right? Because it definitely can. There are two massive problems in the industry right now that AI is well suited to address.
Okay, what are they?
First, the talent shortage. There just aren't enough cyber security experts globally
and And it's worse in automotive.
Even more dramatic. Yeah.
Right.
Because the demand is so high everywhere else, it's tough to hire and retain experts.
And the ones they do have are overloaded.
Totally. Especially with regulations demanding more and more. Second big problem, data overload.
We touched on this. The sheer volume of security data from vehicles.
Massive increase from the cars themselves, from the backend systems. It's incredibly hard for humans to wade through all the noise, the false positives.
Do you find the actual ual meaningful security events.
Exactly. It's like finding needles in enormous hay stacks.
Okay, so talent shortage and data overload. How does AI help there?
Well, AI excels at processing vast amounts of data, finding patterns, finding meaningful insights much faster than humans.
And it can automate the boring stuff,
the tedious repetitive tasks. Yes. Which frees up those scarce human experts to focus on the complex highle problems.
So it's not about replacing experts, at least not soon.
Not in the near future. No, it's about augmenting them, making them more effective.
Okay, let's get practical. What are some use cases? How can AI help in, say, the development phase?
Imagine AI assisting with security design. It could look at system specifications
like blueprints for a new car system,
right? And automatically start the threat assessment, suggest ways to mitigate risks, define security requirements, maybe even describe potential security solutions.
That sounds incredibly useful, speeding things up.
And it could recommend or even help implement security testing too.
You mentioned Plexidity X has a tool for Terra automation.
Yes. Autodesigner. It automates a lot of that threat assessment and remediation analysis. And they're working on more AI features
like generating data flow diagrams, DFDs.
Exactly. Generating those diagrams directly from architectural descriptions. Saves a huge amount of manual effort.
How much time are we talking?
They're also thinking about a kind of cyber security expert chatbot.
A chatbot for the security engineers.
Yeah. Something that could help them during the terror process. process, identify gaps, offer advice, give recommendations, basically accelerate their work significantly.
What kind of savings are possible?
Their internal estimate is over 50% time and effort savings just on Terra and potentially much more as AI gets more integrated.
50%. That's huge for stretch teams.
It really is. It builds capacity.
Okay, so that's development. What about enhancing detection and investigations? You mentioned alert fatigue.
Yes, that's a massive issue in security operations centers, SOC's. Just too many alerts, mostly noise,
making it easy to miss the real threats.
Exactly. Now, research shows AI models can be really good at spotting anomalies in vehicle networks.
How? By looking at communication patterns, sensor data.
Both. Some studies claim really high accuracy, like 9900% detection for certain attacks.
Wow. Is that practical to put in the car though?
There are challenges, complexities to implementing complex AI detection directly in the vehicle, but the potential Especially for the SOC analyzing data offboard is significant.
So maybe the AI helps the human analysts in the SOC.
Precisely. Take Plexity X's XDR solution. It has a security expert chatbot built in
like the one for Terra, but for SOC operators.
Yes. To help level one and level two operators. It can explain the potential impact of an alert or even run complex investigation steps in seconds.
Things that would normally take hours or need a more senior expert.
Exactly. The operator can then just verify the AI findings and decide on the response much faster.
That sounds incredibly powerful for speeding up response times.
It really is.
And you also mentioned optimizing how security events are managed inside the vehicle itself,
right? So AI doesn't necessarily have to do the primary detection in the car. It could manage the output from existing sensors,
filtering the flood again.
Yes. Managing that deluge. AI could decide, okay, this piece of information is irrelevant. Discard it. Save storage. Save bandwidth
and only send the really interesting stuff back to the SOC.
Exactly. And it could even add context. Maybe describe the type of attack it thinks is happening or what stage it's in.
So the SOC gets less data but smarter, more actionable intelligence.
That's the goal. Making incident response much more efficient.
Okay, so wrapping up our deep dive here. It seems clear AI offers huge potential benefits.
Definitely. It can significantly enhance the security and ultimately the safety of vehicle fleets. Faster, better detections
and it empowers the human the security experts, the developers, letting them focus on the really hard problems.
Makes them more efficient, allows them to tackle complexity that requires that human ingenuity.
Good. There's always a butt.
There is. We absolutely cannot forget the risks. AI is also an attack vector.
So, the key message is awareness, proactive defense.
Absolutely. Keep those potential risks front and center. Implement protections before these things become realworld problems in deployed vehicles. Don't hate to be attacked.
Which leads to a final thought for you, our listener, to maybe ponder on your own.
As vehicles become more autonomous, largely driven by AI.
How does that change what we even mean by a security incident? And what's our human role going to be in responding when the car itself is making more decisions?
Yeah. The core insight really is that AI and automotive security isn't about replacing people. It's about um rep prioritizing human effort,
freeing us from the routine to tackle the complex.
Exactly. But it also demands we fundamentally reeval evaluate our security assumptions in this new AIdriven world.
It's definitely a fascinating fastmoving space. Balancing innovation with vigilance is absolutely key. We hope this deep dive gave you a good handle on this critical intersection and maybe spark some new thinking about the road ahead.