Technology and Security

In this episode of the Technology & Security podcast, host Dr. Miah Hammond-Errey is joined by Kate Carruthers. Kate is currently the head of data analytics and AI at the Australian Institute of Company Directors. She shares her journey from defending Westfield against state and non-state cyber attacks to leading UNSW's enterprise data, AI, and cybersecurity efforts, including delivering the university's first production AI system in 2019 and re-architecting its cloud data platform for AI and ML. She notes boardrooms are evolving from basic cyber literacy to probing AI risks, asking questions about models, data, and risk registers.
 
Carruthers outlines some real-world examples, such as UNSW's enterprise AI program, including a machine learning model that predicted which students were likely to fail a course with 95%+ accuracy, so the university could design careful, humane intervention protocols to reduce self-harm risk. She argues that while frontier models like those from OpenAI and Google have a place, their compute costs, water intensity and general-purpose design make them poorly suited to some business problems, and that the future lies in smaller, industry-specific models trained on highly relevant data. The conversation covers the rise of agentic AI coding tools, the risk of deskilling junior developers, and the need for diverse, product-focused teams to translate technical systems into workable human processes.
 
On security, she prioritizes integrity over confidentiality within the CIA triad, warning that data in cars, medical devices, and government systems can be altered through poisoning or left exposed by underinvestment in encryption. Carruthers urges Australia to pursue sovereign AI capability, favoring open-source options like Databricks over proprietary stacks, and points to contrasting US and Chinese approaches to model design and to outage risks from providers like AWS, Cloudflare, and CrowdStrike. Throughout, she encourages leaders not just to read about AI but to use multiple systems themselves, understand their limitations as probabilistic tools in deterministic business environments, and ground every deployment in clearly defined problems, ethics, and user needs.

What is Technology and Security?

Technology and Security (TS) explores the intersections of emerging technologies and security. It is hosted by Dr Miah Hammond-Errey. Each month, experts in technology and security join Miah to discuss pressing issues, policy debates, international developments, and share leadership and career advice. https://miahhe.com/about-ts | https://stratfutures.com

TECHNOLOGY AND SECURITY // TS THE POD Kate Carruthers

Dr Miah Hammond-Errey (00:00) My guest today is Kate Carruthers.

Kate is a distinguished leader in data analytics, AI, and cybersecurity across industry, higher education, startups, and board governance. Kate is the head of data analytics and AI at the Australian Institute of Company Directors. She was the chief data and insights officer at the University of New South Wales, Sydney, for over a decade and has had an extensive career in data analytics and IT prior to that.

She's been on a range of industry and government advisory boards. She educates the next generation of leaders through academic and professional programs. She's received countless awards and been named one of Australia's most powerful women in tech, a data visionary and a data analytics innovator. She's also a strong advocate for women in STEM and host of the Data Revolution podcast. Thanks again for coming on the podcast.

Kate Carruthers (00:55)
It's really great to be here. I've listened to your podcast for ages, so it's such a pleasant thing to be joining you finally.

Dr Miah Hammond-Errey (01:03)
Thank you so much. I'm coming to you today from the lands of the Gadigal people. I pay my respects to elders past, present and emerging and acknowledge their continuing connection to land and community.

Kate, you were in data and analytics long before it was cool. Can you tell me what has changed over the course of your career?

Kate Carruthers (01:20)
If you think back to the really early days of my career, the first system I managed, we never imagined anyone would ever want to hack it. I think it had 16 megabytes of memory and it was considered state of the art. And there was literally no security on that system. We were so naive; we couldn't imagine why anybody would want to hack us.

And in those days, I was a member of a hacking community that used to meet on the steps of Town Hall in Sydney.

Dr Miah Hammond-Errey (01:47)
So you were the hackers, but you still couldn't envisage anyone wanting to get into your system.

Kate Carruthers (01:52)
Well, in the early days it was all about accessing telecommunication services. So people were trying to use telecommunication services for free, and that was the big expensive thing. The hacking community hadn't really evolved to what it is today. So it was quite a different community, and, you know, people would try and hack systems, but not maliciously, just for the challenge.

There was no thought that you could do it for money or anything. So it was a very, very different world.

Dr Miah Hammond-Errey (02:21)
I teach a course on the political components of cybersecurity. And it's primarily computer scientists and engineers who have thought about information in the CIA triad, you know, confidentiality, integrity, and availability. And it's interesting talking to them about national security threat actors and the intent of state actors.

As primarily a data person, what was it that got you really interested in cyber at the University of New South Wales?

Kate Carruthers (02:48)
So when I became responsible for data protection, I suddenly realised that cybersecurity needed to be one of my concerns. And so I went and got my certified information security manager and started delving into it in a really serious way. And I had that long ago background with the hacking community. So I did know some stuff and I'd managed large systems. So I was global head of digital business for Westfield.

And Westfield was basically seen as a Jewish company, so we were constantly under attack by Hamas- and Palestinian-related state actors and non-state actors. A lot of people were attacking us, so we had very strong intrusion detection capabilities in that organisation globally. So I already had some sort of exposure to cyber issues, but I really

needed to get into it when I realised that at the university our data was basically undefended and we needed to tighten that up, and that we actually had some data sets in the research space that would be of great interest to state actors. That's what really led me down the path of needing to care about cyber security in a really serious way.

Dr Miah Hammond-Errey (04:07)
Big organisations like Westfield have the capacity to develop really strong systems, but as individuals, it's very difficult. And for some small and medium-sized organisations, it's also really hard to develop those capabilities. How do you see the future of cybersecurity for those kinds of small and medium enterprises?

Kate Carruthers (04:27)
Well, I think a lot of it... you know, there's a whole bunch of home routers that have just been harnessed in a big bot attack on some large businesses in the US.

They're all routers that are really end of life and shouldn't still be in operation. They're not being security patched anymore. So really, one of the things that we should be doing is getting government to mandate that it's no longer acceptable for companies to stop patching those sorts of devices, which will be harnessed by bad actors. And the other thing is, increasingly you don't even need a router to launch an attack. You just need a fridge, one of these internet of

things devices. And the thing is, there are no standards for those either. So I think really we need government to step up and mandate some minimum standard for those kinds of devices that people don't typically think of as a cyber security device. But your home router, if it's an ASUS one, can be harnessed now by bad actors. If it's a few-years-old router, it's probably participating in a bot attack while we're speaking.

Dr Miah Hammond-Errey (05:31)
For people that are like, I'm worried my router's... Is there a warning sign?

Kate Carruthers (05:36)
Well, there are, that's the thing. These devices are not constructed to do that, so you actually need to know how to log into them. You know, a normal human being has never logged into their... yeah. So, you know, we need manufacturers to be told that they have an obligation to make these things easy for people to do the right thing. So make it easy for somebody who doesn't know anything about it to change the password. All of those, you know, five-year-old routers

Dr Miah Hammond-Errey (05:45)
change your username and password.

Kate Carruthers (06:03)
that are out there are potentially vulnerable to those kinds of attacks, and they're being harnessed in big attacks against large corporates or government.

Dr Miah Hammond-Errey (06:13)
Yeah, absolutely. When we look at things like IoT, we really are looking at a very, very broad swath of devices: everything from things for children, through to family home objects, all the way through to wearables and so on. There's such a...

Kate Carruthers (06:30)
light bulbs.

Dr Miah Hammond-Errey (06:31)
Right. Such a diverse array of devices that we're looking at making up these networks and systems. I tell people how many devices are in the average Australian home, and I think this year it was more than 33 or 34. But of course, all of those are weak points.

Kate Carruthers (06:47)
But now you can't even buy things without it. I bought a new washing machine recently and it's got a wifi connection. I haven't connected it up, but it's there.

Dr Miah Hammond-Errey (06:57)
Yeah.

You've been working in board governance and with industry leaders for a really long time, currently at the AICD. What are the themes or threads that have run through your advice over that time?

Kate Carruthers (07:10)
It was probably about five years ago that the message started to get really strong for board directors that they needed to care about cyber security, and they all suddenly started to pay attention. And there was a big lift in literacy as a result of that. And I'm detecting the same sort of shift now with AI, artificial intelligence, where boards are now interested in asking questions about AI

the way that they were five years ago with cyber security. And, you know, all of the really simple messages that we gave them when they started asking those questions, things like, have you heard of the Essential Eight? Some of those might be a good idea. So really directing them to the Australian Cyber Security Centre and those kinds of resources, and getting them around the language and the

constructs they needed to think about it in terms of, so they could consider what the actual risk was to their business. Because, you know, when you think about board directors, one of the things they're really focusing on is risks to the business, and cyber security is one of the big risks. And I think most organisations have the risk message now, but

that risk landscape has just really expanded due to the advent of AI.

Dr Miah Hammond-Errey (08:33)
What do you think the next lift in literacy will be? Or the need for a lift in literacy?

Kate Carruthers (08:38)
I think AI has a fair way to run. We're really, really at the very beginning of that journey, and AI is changing intra-day. While I was writing an AI policy recently, over a couple of weeks, doing some consultation on it, three

new kinds of AI models came out, like three new, distinct, separate kinds of AI models. And they're in the wild already, so they're actually being used in the world. So they got invented and within a couple of months were actually being used. This is moving very, very fast.

So the US basically has this idea that they will have ever larger models that are very much driven by hardware, so driven by the chips.

And because China's been blocked from having access to chips, they've been forced to solve their problems with software rather than hardware. So they're taking a different architectural and design approach to their models, so they're much smaller but very highly performant. It'll be interesting to see which model wins.

Dr Miah Hammond-Errey (09:52)
You segue perfectly to my next question, which is that you teach a course at the Australian Graduate School of Management. And I know that you've argued the future is smaller and more focused models, rather than ever larger and fancier LLMs. Can you talk us through that?

Kate Carruthers (10:07)
Yeah, so I think there's absolutely a place for things like OpenAI and Gemini, and they're doing amazing things. Each iteration is remarkable, and the latest version of Gemini is pretty darn good.

The thing is that they are very costly in terms of compute, and when it's costly in terms of compute, it's costly in terms of resources, because every time you're doing a high volume of compute you're generating a lot of heat, which requires a lot of cooling. And that cooling is typically water-based cooling, so these data centers really need lots and lots of water. And water is one of the things on earth that will become quite

scarce and quite contested over the next few years. And the other thing is that typically everybody that's using these very large models like OpenAI and Gemini is not paying the full cost of them.

So these companies will ultimately need to start to get the revenue back. So they will either put up the prices to use them or we will just keep using these very expensive resources.

And these general purpose models are general purpose, so they're not customized for your own business problems. So if you're in a business and you've got a problem with X, you want something that can solve for X, not do everything else in the world. And when I talk about smaller models, the sort of thing I'm thinking of is an industry-based model that can work across that industry's own problems, a very well-trained

model with a training data set that's very, very specific to that industry. And that can be done for much lower cost and much lower computing resources than, say, OpenAI, which costs squillions.

Dr Miah Hammond-Errey (11:59)
an agreement saying it can't be FOI'd. I mean, that does seem surprising to me that that was an oversight.

Kate Carruthers (12:06)
Or subject to the discovery process for a legal matter.

Dr Miah Hammond-Errey (12:10)
You delivered the University of New South Wales's first enterprise AI system and then led the re-architecture of the cloud platform to support artificial intelligence, machine learning, business intelligence and new analytics. Can you tell me a little bit about this?

Kate Carruthers (12:24)
It was about 2018 when we started to have some discussions with some people in the education space in the university, and they had a problem with students self-harming.

And they realised that there was a very strong correlation with people failing their degrees and failing courses, and they wanted to be able to reliably predict who was going to fail, not during a degree but during an individual course, because, you know, the aggregate of failing several courses in a degree program is what would lead to this self-harm.

We came up with a model whereby we could tell, starting from week one of a 10-week course and with increasing accuracy every week, who was going to fail. It's still running today, and we were able to predict with 95-plus percent accuracy who was going to fail. So it was a machine learning model that we developed.

The funny thing was, the proof of concept took six weeks to finalise, and then the business looked at it and said, we thought that would take longer; we don't know what to do with this. And it took them two years to work out what the protocols were going to be, because you can't just phone up a student and say, hey, you're about to fail. So, you know, they needed to work out what...

Dr Miah Hammond-Errey (13:40)
Yeah, you need to chat to them.

Kate Carruthers (13:42)
How are we going to approach it? Who's the right person? All this really sensitive stuff about how you use this information that you've garnered through AI. So the model was running in production, and we ran it against our historical data. It was very highly predictive. And eventually, like two years ago, it went live with the right interventions, and it won an award. But it really came out of a genuine business

problem, which is part of our pastoral care role: we don't want people jumping off buildings.

And the data told us, when we analysed it, that if people were talking to the school counselling service they tended not to self-harm. But for the ones who were outside that, who were failing courses, we didn't have any leading indicators. So this became our leading indicator.
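
For readers who want a feel for what a model like this can look like, here is a minimal, hypothetical sketch in Python: one logistic regression per teaching week over synthetic engagement features. The feature set, data and accuracy behaviour are invented for illustration and do not reflect UNSW's actual system.

```python
# Hypothetical sketch of a weekly "at risk of failing" predictor of the kind
# described above. Features, data and numbers are fabricated for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)

def synthetic_cohort(n_students: int, week: int):
    """Fabricated engagement data up to a given week of a 10-week course."""
    # Columns: logins/week, average assessment mark so far, forum posts, video minutes
    X = rng.normal(loc=[5, 65, 3, 120], scale=[2, 15, 2, 60], size=(n_students, 4))
    # Fabricated ground truth: low marks and low engagement correlate with failing.
    # Label noise shrinks as the course progresses, so later weeks are easier to predict.
    risk = 0.6 * (X[:, 1] < 55) + 0.4 * (X[:, 0] < 4)
    y = (risk + rng.normal(0, 0.45 / week, n_students)) > 0.5
    return X, y.astype(int)

# Train one model per teaching week; accuracy climbs as more signal accumulates.
for week in range(1, 11):
    X, y = synthetic_cohort(2000, week)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print(f"week {week:2d}: accuracy {accuracy_score(y_te, model.predict(X_te)):.2f}")
```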

Dr Miah Hammond-Errey (14:35)
I really like this story for a few reasons. One, cool program, cool way of looking at a problem and trying to find something new. So congratulations. Also, it is a bit of a parallel for so many of the systems that we're looking at now, which are technically awesome, deliver an outcome or a decision support assist or in some way provide advice.

And then the translation from that technical piece into what do we do with this as human beings? Like how do we actually then manage this? And this is where I feel like

the programs that fall under AI come a little bit unstuck, because so much of the outcome of them requires input and translation from that technical piece into how we move this

to develop systems and protocols and wrap around the support that people need, whether that's language, whether that's financial, whether that's emotional, whatever it is.

So many of the solutions that we look at today still need that translation. You have done a lot of the technical piece and now you advise people on how to make those translations. What do you think it is that people struggle with and how can we do better?

Kate Carruthers (15:54)
One of the important things is having diversity in the tech teams that develop these things, and the other one is to have a product management approach. So looking at it as a product that has to meet the needs of its actual users. Typically the users are not the people you think you're building it for. So if you can line up the needs of the people for whom you're building it and the users who are actually using it,

you can get good results, and that is often, in my experience, a product management sort of approach.

And it's not very often adopted with technology. And, you know, I keep joking that Google has never had product managers, and this is why so many of their ideas have failed: because it's a bunch of tech bros sitting in a room, programmers, you know, with engineering skills up the wazoo, but no actual people who ever use it.

So, you know, that's a really common problem. You've got all these people who know how to build stuff and they're not your average user.

Dr Miah Hammond-Errey (16:57)
What do you see as the most interesting or concerning technology and security trends?

Kate Carruthers (17:02)
Well, the thing that I see as both really exciting and really scary all at once is the use of AI to develop solutions. So OpenAI developed their Sora video app,

and that was, I'm told, developed by two-thirds agentic software engineers. So only one-third of that was developed by humans. And that is apparently, you know, the way of the future. So there are very real issues around the quality of the code,

because all the code automation that we've ever done, it's never quite as good as code that is written by humans. I think the code is very... It's because of nuances like...

Dr Miah Hammond-Errey (17:45)
Can you explain why?

Kate Carruthers (17:52)
I've worked with code automation software for many years, and typically automated code is very verbose; it can be much more succinct if a human writes it. Then there are things like

the translation of requirements. There are nuances to requirements that even human beings don't always get right. So that's a challenge. And just in terms of what we think of as good code, everybody's got a different opinion. That's part of the problem.

But when you're leading development teams, you want the code to be good enough. You don't want it to be perfect, you don't want it to be too verbose, and you want it to be efficient and effective in what it's doing. And it'll be interesting to see... like, I obviously haven't seen this sort of code; you know, I don't think it'll be open sourced.

But this is one of the things: when you look at the code, is it sufficient, or is it bloated? That's always a real challenge. And the other thing is, is software engineering going to die? Like, is that a job that will die in the future? There are many jobs that are going to disappear.

One of the challenges we're going to have is, if we acknowledge that AI needs human supervision, how are we going to have knowledgeable human supervision over AI-generated code if there are no coders?

Dr Miah Hammond-Errey (19:30)
or inexperienced coders. Yeah, that's a huge problem.

Kate Carruthers (19:31)
Yeah.

Yeah

So one of the things we're finding, and there's a bunch of academic research articles now saying this, is that the people getting real benefit out of AI coding assistants are more senior developers, because when they look at the outputs they know if they're good, so they get a lot of benefit from it. But a novice coder doesn't know if it's right when they look at it, so they don't get as much benefit, and it doesn't speed them up as much as more experienced coders. So how we are going to grow our next generation

of people who know what good looks like is a real question.

Dr Miah Hammond-Errey (20:04)
And that research is actually really similar to other research, which shows that existing AI models are already displacing junior employment and preferencing senior employees who are able to distinguish what good looks like. It's exactly the same question, just in a lot of different fields: you can't actually harness the power of something if you don't know how to assess it.

Kate Carruthers (20:28)
Yeah.

So I think there's a genuine question here, and every country needs to look at this on its own. I think the US, it's pretty clear they've made their decision: they're going to get rid of all the entry-level jobs, automate the heck out of everything, and deal with that later, because that's how they roll. But Australia doesn't have to choose that as a direction. We can choose a different way. Sure, we might take on fewer grads than we used to, but we still need to bring them through so that we can

have some human beings. Because, as most codifications of AI rules go, they want humans to have oversight, and that's going to be an important thing.

The other thing is that people know a lot more than just coding, just to take coding for example. You know, typically your coders understand things like the OSI layers and how things work more broadly,

and a lot of tools, I don't think, are thinking that broadly. I think the kind of future that we need involves some generalists: people who understand the specifics of coding but, more generally, how the internet hangs together, how it all needs to be built so that it works effectively, and who

can bring in things like safeguards. We're probably going to have to automate AI safeguards, and we'll need humans to do that, because we've already seen evidence that AI doesn't like safeguards and tries to turn them off.

Dr Miah Hammond-Errey (22:09)
the models and the developers.

Kate Carruthers (22:11)
Yeah.

Dr Miah Hammond-Errey (22:12)
How do you frame the challenges and opportunities of AI for business and government when there are so many loud voices at either end of the spectrum?

Kate Carruthers (22:20)
The way I frame it is we need to focus it in: it's a tool, it's just another technology tool, the same as every other technology tool. Maybe a bit cleverer. But, you know, we use tools to solve problems. And if we keep our eye on the problems we're trying to solve, and the ethical parameters around the problems we're trying to solve,

and the people for whom we're trying to solve those problems. That seems to me to be a really sensible way to frame it up.

So when I'm talking to business leaders, I really do encourage them to focus on actual problems to solve, and to focus on diversity in assessing those problems so that they are getting good perspectives on the actual problem, before they work out what technology they're going to apply to it and then try to solve it. Because I think that is often the missing step; people always think AI is the

new silver bullet, that it'll do everything you want. But just because we can do stuff, should we? And that needs to be framed with the users in mind, the ethics in mind, and then the technology coming last.

Dr Miah Hammond-Errey (23:29)
What advice do you give leaders as they approach technological transformation?

Kate Carruthers (23:34)
The important thing, if you're thinking about technology transformation for your business,

is to understand that it is just a technology. It's the same thing as a hammer was back in the day, and we need to work out how to wield it effectively in our own context. Each industry will be in a different context. Each government jurisdiction will be in a different context. And we need to build it and deploy it in ways that make sense to the broader community

that we exist within, and this culture. And then you've got the challenge of working across jurisdictions, so you need to customize it. But I think that fundamental thing of always asking why we are doing this, rather than just rushing off and doing it, can avoid a lot of angst, if you are just being

conscious and thoughtful about why you are doing it, and think about trying to explain it to a normal person. The thing is, you can get so caught up in the technology that you can forget that you're doing it to, allegedly, help your customer or your constituent.

Dr Miah Hammond-Errey (24:44)
Yeah, absolutely. I'm going to go to a segment. What do you see as the biggest challenge for leaders in our current technology environment?

Kate Carruthers (24:50)
I don't think enough of them are actually using AI.

I think more of them really need to be hands-on using AI, and using different kinds of AI for different purposes. You know, they need to understand the difference between the different models. They need to be putting the same question into several different ones to see how they perform, so they can understand the differences. You know, we've just used AI, artificial intelligence, as the word, but AI encompasses a

whole lot of technology that includes things like machine learning, generative AI, large language models, large hierarchical models. So there's a huge range of things in that, and now there's agentic models. So leaders need to come to grips with these and start to understand what the different products, commercial products, do and can offer. And I think they really need to get hands-on with it.

They can't hide from it, and I don't see a lot of them doing this, and it's something that I always encourage people to do.

Pay for a couple if you can afford it and just start using them. And the other thing that I do, and I tell people not to do this on their work computers, is use some of the Chinese ones, because they're quite different and they're really quite performant too. So things like Kimi K2, DeepSeek and Manus AI. But don't use them on your work computers, people, please.

there are a number of state actors that we know are very active in the world who run active attacks.

The Chinese are among them, and they are building these really amazing models, and most of them they're open sourcing, so you can look at the code. But the thing that some of them do is they do tend to phone home, and you probably don't want to install them on your work devices. Friends of mine who do cyber stuff quite seriously will typically install them in

a container that's very well protected. So, you know, think about protecting your information assets, and if you aren't a cyber professional, don't do this lightly; I would seek guidance if you're going to fool around with some of the Chinese models. They're really good, but if you're not a cyber professional, seek advice.

Dr Miah Hammond-Errey (27:16)
How do we help leaders to understand the inherent flaws and constraints that some of these capabilities have? How do we help them understand the risks, particularly for those people who don't have a technical background?

Kate Carruthers (27:33)
I think the first thing to do is, if your risk people at work don't have an AI risk register, you need one. You can ask generative AI to give you a list of risks.

But every organisation needs an AI risk register now, and it needs to be specific to their own context. And increasingly, organisations like banks, which are real innovators in this space, are using agentic models really strongly, typically in fraud detection and customer service and for product applications.

So, interestingly, we've got unsupervised agentic models that are running without human oversight. Like, there's human oversight at the top, but nobody can sort of dip into that and work out what's happening where. And this is one of the key problems we've got with AI: essentially it's a probabilistic technology in a deterministic world. To work in business, we want it to give the right answer, and to give the right answer every time. But that's not

quite the case with generative LLM sorts of approaches, because they're probabilistic. So that's the first thing that I always tell people in my classes: this is the first thing you need to know. They're not going to give you the exact answer every time. They may give roughly the same answer based on different inputs. And this is why you need to start to use these things, to see what they do.

One of the things that I tell people is, if somebody's trying to pitch a business case to you about AI, you need to ask them what they mean by AI. Like, what kind of AI is it? Is it machine learning? Is it deep learning? Is it a neural network? Is it a large language model? Is it a large hierarchical model? What kind is it? And then when they tell you that... Yep. Yep. Yep.
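
To make the probabilistic point concrete, here is a small, self-contained Python sketch. The vocabulary and logits are toy values, not any real model; it shows why sampled outputs vary run to run, and why lowering the sampling temperature pushes behaviour toward the single deterministic answer that business processes expect.

```python
# Minimal illustration of why generative models are probabilistic: the same
# input can yield different outputs because the next token is *sampled* from
# a distribution. Toy logits and vocabulary; no real model is involved.
import numpy as np

vocab = ["approve", "decline", "refer"]
logits = np.array([2.0, 1.5, 0.3])  # toy scores for the next token

def sample_next(logits: np.ndarray, temperature: float, rng: np.random.Generator) -> str:
    """Sample one token: softmax over logits scaled by temperature."""
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return vocab[rng.choice(len(vocab), p=probs)]

rng = np.random.default_rng()
# At temperature 1.0 the answer varies run to run (probabilistic behaviour).
print([sample_next(logits, 1.0, rng) for _ in range(5)])
# As temperature approaches 0, sampling collapses toward the argmax
# (near-deterministic, closer to what deterministic business processes expect).
print([sample_next(logits, 0.05, rng) for _ in range(5)])
```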

Dr Miah Hammond-Errey (29:26)
Yeah. And if the leader doesn't understand that, they need to say, well, what does that mean? What does that do? Where does this data

come from? Yeah.

Kate Carruthers (29:34)
Yep.

And the other thing, going back to that question of what I would tell leaders: we've only talked about how organisations can use AI. The bad actors are using AI too. So there have been a number of recent attacks, mostly by state actors, where they've used AI. And I did a talk at a conference last year where I did open source research

on a target, and within 10 minutes I got information about their network, a list of their key stakeholders in the organisation, a network diagram with affinity modelling of who was who in the zoo, so who of their leadership was clustered together, their contact details,

and potential targets. That was just using... I wasn't even logged in. That's what I got.

Dr Miah Hammond-Errey (30:30)
Yeah, yeah, I think...

I think people are really unaware how exposed we are, both personally and from an organisational perspective. The volume of data out there is so great, and what it reveals about us...

We're very heavily exposed.

Kate Carruthers (30:44)
Well, no, but this research

actually even pulled in Shodan data, so it was telling me stuff about their network, literally telling me stuff about their network in real time. And that would have taken someone weeks to patch together before, and now you can ask ChatGPT and it'll give it to you in like a minute.

Dr Miah Hammond-Errey (31:05)
You have invested really heavily in community organisations in your field. What have you gained out of giving so much back to the community?

Kate Carruthers (31:15)
The thing that I've always done... as a woman in a technology field, I was often the only woman in the room, and I didn't want to be the only woman in the room. So it seemed like a good idea to help others become people in the room too. That's been sort of one of the things that drove me.

And then there's been a growing consciousness over the last decades, for me, that we need to be much more inclusive and we need to be more diverse, because at that table I was surrounded by, we were all white people. So, you know, that was the other thing that needed to change. I get a lot out of it. I've also made good friends from participating in this, so I've got some good relationships and a wide network.

But the main thing is I'm very rarely the only woman in the room nowadays.

Dr Miah Hammond-Errey (32:09)
What are some of the interdependencies and vulnerabilities of data, technology and security that you wish were better understood?

Kate Carruthers (32:15)
So people are not realising that data underpins everything in the modern world.

And the biggest risk now, especially with AI, though it has always been the case: if you can poison the underlying data, you poison the entire data supply chain. And that's why, going back to something you mentioned really early in our conversation, the CIA triad, confidentiality, integrity, availability: of those three, the one that I am most concerned about is integrity. And that is because if somebody can

go in and change the data, they can change your destiny; they can potentially take your life. Data drives everything now: my car, your pacemaker if you've got one. It's everything in our lives now, and it is our lifeblood. And if they don't understand that nexus between data

and integrity, that is the fundamental thing. And that is one of the primary ways that bad actors are trying to attack using AI; one of their key ways is to poison the underlying data. And very few organisations are addressing encryption seriously and things like that. I've even had conversations with people saying, we're not going to encrypt our data because it slows it down, and I'm like, then you need to differentially protect

your data in the databases, because it is at risk. And this is particularly true for government. So government really needs to take account of this. They are often so focused on protecting the data, but they don't understand the real vulnerabilities to attack.
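
As one concrete example of an integrity control (a generic sketch in Python, not a description of any system Carruthers mentions), a keyed hash stored alongside each record makes silent alteration of data detectable. The field names and key handling here are hypothetical and simplified.

```python
# A minimal sketch of one integrity control: keyed hashes (HMACs) over records
# so that silent alteration of stored data is detectable. Field names are
# hypothetical; in practice the key would live in a key-management service.
import hmac
import hashlib
import json

SECRET_KEY = b"replace-with-a-managed-key"

def record_mac(record: dict) -> str:
    """Compute an HMAC-SHA256 tag over a canonical serialisation of a record."""
    canonical = json.dumps(record, sort_keys=True).encode("utf-8")
    return hmac.new(SECRET_KEY, canonical, hashlib.sha256).hexdigest()

def verify(record: dict, tag: str) -> bool:
    """Constant-time check that the stored tag still matches the record."""
    return hmac.compare_digest(record_mac(record), tag)

row = {"patient_id": 1042, "dosage_mg": 50}
tag = record_mac(row)          # stored alongside the row at write time

row["dosage_mg"] = 500         # a bad actor quietly alters the data
print(verify(row, tag))        # False: the tampering is detected
```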

Dr Miah Hammond-Errey (33:55)
Yeah, that is such critical advice. It is so important that we actively decide what level of protection the data in our organisation requires and then protect it accordingly. I really thank you for bringing that one up. It's so important.

I want to go to a segment on alliances. How can we connect organisations and people better in Australia to create new value streams in the digital world?

Kate Carruthers (34:19)
Value streams is an interesting way to think of it,

because people typically don't think of value streams and aren't thinking of how they can hook into them. So I think it's a really valuable question, but I don't think it's in the consciousness of many people.

So I'm thinking that maybe what we need to do, and this is particularly for small to medium enterprises, who we kind of started the conversation talking a bit about, like, they're the most unprotected of all.

So really, I think that maybe we need to start to look to build out into things like the local chambers of commerce or something.

to reach out to that community, because they seem to me to be the ones who are the most underserviced and the most under-resourced.

Dr Miah Hammond-Errey (35:05)
What relationships do you think will be most important in relation to AI and technology in the next year?

Kate Carruthers (35:12)
The relationships internally, inside organisations, because you can't do cyber security in a vacuum; it takes a village.

Cyber security is protecting against external threats, and information security is about protecting the CIA of the organisation. So really think about all of that working together, because if you don't have all of those pieces working together, your information and your cyber security working together, your organisation will be at risk. So if you're not taking a holistic view... that's probably the most important thing. And then for organisations that

aren't big enough to have all of those people, really think about who you can talk to to find out what you can do. There's some good advice that I can share with you for the show notes about cyber security for small business.

Dr Miah Hammond-Errey (35:59)
What's a new cooperation, competition or conflict you see coming in the next year?

Kate Carruthers (36:05)
Well, I think the big conflict is obviously US-China, and that's actually probably going to have implications that we haven't quite thought about yet, because of

things like all the vehicles that China's producing now that everybody's buying. Once the Americans want us to stop doing that, are we really going to? And we've already bought so many. So I think that's an interesting area of conflict; geopolitics will start to seep down into the vehicle world, for example. And that's just that. And then there's...

Dr Miah Hammond-Errey (36:39)
and presumably

IoT as well.

Kate Carruthers (36:42)
Yeah, yeah. And then there's all the state actors and then the non-state actors that are...

carrying out geopolitics in the cyber realm. I don't see any diminution in that. I think that's going to continue, and it will probably even heat up.

Dr Miah Hammond-Errey (37:02)
I mean, one interesting component of this too, though, is the terms of service. When we've got such critical dependency on only a handful of key providers, obviously, if they decide not to continue to provide services, that leaves a real critical concern for governments and for organisations, and it places the concentration of power in a very small number of hands.

Kate Carruthers (37:27)
Well, it's one of the things that leads me to think that Australia needs to start to explore sovereign AI capability, with our own models in our own country that we can rely on should something mean all of these others go away. And we've had the Cloudflare outage recently. We had the Amazon outage not too long back, and then the CrowdStrike outage. So you can see everything fall over pretty quickly

with those big players. But if we as a nation are relying only on foreigners, what happens to Australia if Donald Trump doesn't like us? If he's having a fight with Albo and says, cut off their access to cloud services, to AI services, and they comply, what do we do?

Dr Miah Hammond-Errey (38:18)
And I mean, the International Criminal Court has had that very real experience of being cut off from Microsoft. So it is absolutely a part of that possible future.

Kate Carruthers (38:26)
And that's what made it... As soon as that happened, that immediately went on my risk register.

Dr Miah Hammond-Errey (38:35)
Do you think organisations are taking it seriously enough, or is it one of those too-hard, too-big questions?

Kate Carruthers (38:41)
Well, the thing that we need to do is take precautions. You know, at AICD we're an Azure shop, we're in Azure. And Microsoft have been trying to talk me into buying Fabric, which is another one of their products. But I've chosen to stay on Databricks, because Databricks is open source, and if...

If anything happens, I can fork that code and put it on a server that I own and still run it.

Dr Miah Hammond-Errey (39:10)
I want to go to a seg...

Kate Carruthers (39:10)
Well, you know, a lot of people are just

going, it's too hard, it's too hard. It is very hard. But, you know, you need to have an awareness of the potential risks and what likelihood and impact they'll have.

Dr Miah Hammond-Errey (39:22)
It's amazing how critical technologies become, and how dependent on them we become, when we just use them every day and don't necessarily think through those implications really seriously.

Coming up is Eyes and Ears. What have you been reading, listening to or watching lately that might be of interest to our audience?

Kate Carruthers (39:40)
Well, I've just started reading The Coming Wave by Mustafa Suleyman, who's the CEO of Microsoft AI. And I've just finished reading Karen Hao's Empire of AI. So I'm thinking about all of the implications of AI in our lives, from

what the technology is going to do, what it's doing to our privacy and what it's doing potentially to our jobs, our economies and our future.

Dr Miah Hammond-Errey (40:10)
I've got a segment called Disconnect. How do you wind down and disconnect?

Kate Carruthers (40:14)
Well, I actually realised last year, when I took some time off work, that I don't have any hobbies. So I walk my dog and I do weights and stuff. So researching AI is my hobby, and that's how I wind down.

Dr Miah Hammond-Errey (40:29)
Wait, did you just turn your work into a hobby? Like, is that a flex?

Kate Carruthers (40:33)
Well, I left UNSW about this time last year and I took some time off, and I was like, maybe I don't need to work anymore, maybe I'll just do some consulting and stuff. And then I was still doing all of the kind of research for AI and stuff, and I was like, I don't have any hobbies, this is my hobby, I might as well get paid for this. So I got a job.

Dr Miah Hammond-Errey (40:53)
My final segment is Need to Know, is there anything I didn't ask that would have been great to cover?

Kate Carruthers (40:58)
People need to understand that

bad cyber actors are actively using AI to drive their exploits now. So it has taken the bar that was only a little bit high and made it really super low. So the same way that we're able to augment our own workforces, they've now used AI to augment their workforces. So bad actors used to have to farm jobs out to script kiddies for money to help them scale.

Now they don't need that; they just use their own agentic bots. So the scale of the threats that are coming our way is

going to go up exponentially, and we need to be ready for that. I don't think a lot of people have got their heads around this, and we need to be ready for it. And things like ransomware are still a real threat. You used to need people to launch ransomware; they run call centers and stuff. But this is going to be super easy to do now; you'll just launch your bots.

Dr Miah Hammond-Errey (41:58)
When this exact thing that you have described happens, we flip the script on offence. The equation that we've relied on, that we only have to focus on the things that we know our adversary is most focused on, changes radically, because you can't defend against everything. And yet that is essentially where we're gonna be.

Kate Carruthers (42:18)
And this is why I keep saying to people, we need to think about our data protection at a really granular level. Do we need to protect every single bit of data we've ever got in a database, plus all our unstructured data? It's actually really expensive and hard to do that.

Or do we need to protect single items of data, like a tax file number, a date of birth, you know, those things? Do we encrypt those and make sure they're properly secure? So we maybe need to rethink how we approach this.
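
As a hedged illustration of that granular approach, here is a small Python sketch of field-level protection using the third-party cryptography package's Fernet cipher. The record fields and key handling are hypothetical; a real deployment would manage keys through a key-management service.

```python
# A small sketch of field-level protection: encrypt just the high-value items
# (tax file number, date of birth) rather than every byte in the database.
# Key handling is simplified here; in production the key comes from a KMS.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in production: fetched from a KMS, never hard-coded
f = Fernet(key)

record = {
    "name": "Jane Citizen",              # lower sensitivity, stored in the clear
    "tfn": f.encrypt(b"123456782"),      # hypothetical tax file number, encrypted at rest
    "dob": f.encrypt(b"1990-01-31"),     # date of birth, encrypted at rest
}

# Only code holding the key can recover the protected fields.
print(f.decrypt(record["tfn"]).decode())
```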

Dr Miah Hammond-Errey (42:46)
So if you were telling someone right now, what are the top three things each leader can do to help remediate that?

Kate Carruthers (42:55)
So the first thing, for every organisation: just do the Essential Eight. Do it. Know what it is and get it done.

Dr Miah Hammond-Errey (43:03)
I'm gonna put it in the show notes. If you don't know

what the Essential Eight is, you will after this episode.

Kate Carruthers (43:07)
Please

do it, please do it. So the first thing leaders need to know is that your perimeter has shifted. The perimeter used to be the firewalls around your network. Now the perimeter is wherever your humans are logging in. So you need to defend that. And that's the big shift.

The second thing you need to know is people are your weakest link. They always have been, they always will be. So how can we improve their performance?

And the third thing is, we need to lift the level of cyber, data and AI literacy for our management people, so that they can't just throw up their hands and go, it's all this technology stuff, I don't understand it, it's the modern world. They need to get their heads across it, and they need to understand where the threats are coming from.

Dr Miah Hammond-Errey (43:52)
Excellent. That's exactly what people need to take away. Kate, thank you so much for joining me today. It's been such a pleasure interviewing you.

Kate Carruthers (43:59)
Thank you so much for having me. I've really enjoyed our chat.