Technology and Security

On this episode of Technology & Security, Dr Miah Hammond-Errey is joined by Aurélie Jacquet, Chair of Australia's ISO AI Standards Committee, OECD AI expert, and advisor to some of the world's most influential organisations. Deploying AI responsibly takes far more than a good policy, and this episode examines what responsible implementation actually demands. The discussion draws on lessons from capital markets, privacy law, international standards work and Fortune 500 companies.

Aurélie brings rare breadth to questions that matter: how organisations can move from AI ethics commitments to genuine controls, why scaling without governance is scaling risk, and what AI conversation Australia will regret not having had today. Aurélie Jacquet is the CEO of Ethical AI Consulting, Chair of Australia's ISO AI Standards Committee and an OECD AI expert.

What is Technology and Security?

Technology and Security (TS) explores the intersections of emerging technologies and security. It is hosted by Dr Miah Hammond-Errey. Each month, experts in technology and security join Miah to discuss pressing issues, policy debates, international developments, and share leadership and career advice. https://miahhe.com/about-ts | https://stratfutures.com

Miah Hammond-Errey (00:03.752)
My guest today is Aurelie Jacquet. Thanks for joining me, Aurelie.

Aurelie Jacquet (00:08.334)
It's absolute pleasure to be on the podcast Miah. Thank you for inviting me.

Miah Hammond-Errey (00:12.712)
Aurelie is a leading global expert in AI ethics and governance. She specializes in standards and the implementation of responsible and safe AI. She's currently the CEO of Ethical AI Consulting, where she advises organisations, including Fortune 500 and ASX 20 companies, on the safe and responsible implementation of AI technologies. She's advised, in Australia and globally, on pretty much every AI committee that matters, too many to list them all.

For example, she's an OECD AI expert, Chair of Australia's ISO AI Standards Committee, and advises the New South Wales and Australian federal governments. She speaks globally, has won countless awards and has been recognized extensively for her leadership in responsible AI. I'm so glad to have you on the podcast, Aurelie.

Aurelie Jacquet (00:59.342)
Thank you Miah, that's the best summary. Much better than Chatty Petey could ever do, so really, really appreciate it.

Miah Hammond-Errey (01:05.226)
Brilliant. We're coming to you today from the lands of the Gadigal people. I pay my respects to elders past, present and emerging and acknowledge their continuing connection to land and community.

Miah Hammond-Errey (01:19.122)
As an advisor to top Fortune 500 and ASX companies, do you have any insights into the trends when it comes to actual productivity gains?

Aurelie Jacquet (01:31.116)
Yes. Before I jump in, I'll also acknowledge the elders past, present and future, and say that we have some amazing leaders from this community in AI. I just want to say thank you to all of them for the great work that they do in Australia, in New Zealand and everywhere around the world. Okay, leaders and productivity on AI. AI is a tool, so if you don't know how to use it, you will not be productive.

If you know how to use it cleverly, you will be. This is not changing; it's always the same challenge. You need to be very mindful and understand how it can work for you. So we see it used well and producing gains when you use it for fraud, for example, or for specific supply chain processes, and also excellent results for monitoring

devices like a railway or a train, and the reason for that is, one, you've got really good data that doesn't change all the time, and then you have a very clear process. What AI is really bad at is when there's no pattern. And that goes to what you mentioned, that AI should be precise.

When we say that, it goes for all AI. When I started in AI, we were looking at pre-GPT models, and they were quite bad if you didn't have enough data and if you didn't know the process. So this applies across the board: you can bring it to GenAI and agentic AI and, wait for it, to frontier AI. So the upshot of this is you need really good data.

You need really good processes; then you can actually make sense of them and optimize them with AI. That's what's really key. If you look at emotion recognition and where it's been banned, it's because, well, go and try to predict people's emotions. Everyone's different; everyone has a different way of expressing them. So this is a terrible use of AI, and that's why it's been banned pretty much

Aurelie Jacquet (03:52.993)
in many countries.

Miah Hammond-Errey (03:54.718)
So can you share some of the best practices that top companies are thinking about and implementing? You mentioned monitoring things like railways or supply chains, and presumably cybersecurity breaches and financial transactions and so on. What are some of those best practices that companies you've worked with have focused on?

Aurelie Jacquet (04:16.344)
So great you ask. Very recently, I worked with two big companies on best practice. AWS published a Responsible AI Lens, which actually identifies the specific best practices that a team, like a product team with builders, should take when building an AI system. And what you see is it starts from a use case.

You can't just build AI without having a specific use case in mind. And it's not any use case. You need to understand whether it's the appropriate use case by effectively looking at the legal requirements. Three quarters of the incidents that we see when using AI are because they didn't understand the requirements of the product or the service that they were developing. So that's absolutely key.

Two, then: what data do you have? Can you enhance the data? Can you acquire the data that's necessary for that use case? And third, do people really need that service? Will it help them? Will it work in their workflow? We see in hospitals that systems are delivered one after the other and they're just not adopted, because it's just one bad design after another.

So this practice, and I'm picking examples here: with the AWS Responsible AI Lens, you'll see the first one is definitely define your use case. And that's what I see time and time again where it's failed. One of the pieces where it's failing is just poor choice of use case, going for something

where you will not meet the requirements with AI, or you don't have the data, or it will not fill the need. That's, again, an old problem, but it's very pertinent to any of those AI systems. Then, also published this year, there was the CBA transparency report, which I helped with, where CBA explains its best practice.

Aurelie Jacquet (06:39.758)
What was great about this report is it's actually showing what a leading company in Australia is doing with AI. You know, I've been super passionate about making sure that Australians have a voice globally and show what they do, because I get the question all the time: oh, are we behind? We just don't know yet. There are actually people here doing amazing work. So while the report, you can't

share everything and anything, because there's still compliance, otherwise it would be a security issue for the company, the report actually shows organisations what it means to do AI and GenAI well. Again, back to your question: from my perspective, there's not that much distinction. It's a small adaptation you need to make each time you move from AI to GenAI to agentic AI. It's not like constant new

things coming where we need to learn everything from scratch. It's actually the same problem with some variations that we need to think about. And you see with CBA, they actually show in their report that they've been in the conversation from day one, and it's incremental changes that they need to make

to adapt to the latest technology. And that's a faster way to adopt the technology. If you say, I'll wait to see what regulations are coming, I'll wait for the latest technology, you're going to have such a hard time catching up. What's important with CBA is they give you a good example of where it works.

And you don't need to start with a groundbreaking application first. You need to start with what you know best, because you know where the problems are and you know where the benefits are. So that's why, for CBA, you had a good fraud example, or, we've been experimenting with chatbots. So again, this is where the learning happens,

Aurelie Jacquet (09:01.238)
and effectively that's why they say their policy is iterative. You mentioned monitoring; I think no one should deploy an AI system if they're not monitoring it.

Miah Hammond-Errey (09:11.947)
Can you give us a little bit more about that? I mean, obviously that Commonwealth Bank report kind of shows that AI has been considered across the whole life cycle of their operations. And that's what's, I think, probably quite groundbreaking about it: they've not just added it on, they've actually thought about it through that process. And when we think about

monitoring things, a critical element of that is being able to evaluate how successful something is and then feed that lesson back into the system. Why did you add that statement? What have you been seeing?

Aurelie Jacquet (09:48.141)
I'll give you a long story and try to make it as short as possible.

I come from capital markets, where, as a compliance function, I was managing algorithmic traders. I had the opportunity to do that around the GFC; algorithmic trading could be seen as the beginning of AI. What happened there is, if you don't monitor your

algorithm as it evolves on the market and you just let it run, well, you can only go back into the past and figure out what happened. And that's how I went into AI. In AI, I actually helped shape international best practice: I work with ISO as the Australian head of delegation. So what we did there, actually,

with Australia, we shaped international best practice. There's now, and I'll try to be as non-technical as possible, one standard that we have that tells you what the key controls are that you need to have for your AI system. In financial services, because we had models for markets and we were already using models, we knew that you need to monitor those models.

For most organisations, this is brand new. You released a product and you got customer feedback, but that no longer works with AI. With AI, from the very beginning, the performance was dropping, so you needed to know why and when the performance was dropping, and that's why you were monitoring: so you knew you had to retrain. Now,

Aurelie Jacquet (11:48.735)
obviously, with ChatGPT, or any of the generative AI models, you see they retrain regularly, so they adapt. But you need to see that they're not adapting in ways you don't want, and that each time there's a change, they're still performing as they should. They have a great advantage: they're much more dynamic, so they can adapt. But equally,

because they adapt, you need to manage them accordingly. So that's why I say, if there's one thing you need to do, it's monitor, monitor, monitor. Also, it will help you with incidents, because with AI, any of those products, what they do is automate, right? So if you automate something, you do it at scale. And if it's at scale, there are incidents at scale. So,

back to my GFC comparison: in that world, we had what's called a red button. I prefer not to call it a red button; what we need is a fail-safe. So effectively you have a way to manage the algorithm and replace it with something that's safer. That's what needs to happen, and that's why monitoring is absolutely key.

Why am I saying monitoring is key? Because it's a lesson from the past that applies very much to all AI systems. And we've seen some absolutely great work, which I've helped out with, at the OECD, and there are also standards coming on incident management for AI systems.
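The monitor-and-fail-safe pattern described here can be sketched in a few lines of code. This is a hypothetical illustration, not any real platform's API: a wrapper tracks a rolling accuracy metric for a deployed model from delayed ground-truth feedback, and once performance drops below a threshold it trips, routing traffic to a simpler, safer fallback, the "red button" Aurélie prefers to call a fail-safe.

```python
from collections import deque

class MonitoredModel:
    """Serve a primary model, but trip to a safe fallback
    when rolling accuracy drops below a threshold."""

    def __init__(self, primary, fallback, threshold=0.9, window=100):
        self.primary = primary      # callable: input -> prediction
        self.fallback = fallback    # simpler, well-understood model
        self.threshold = threshold
        self.outcomes = deque(maxlen=window)  # rolling hit/miss record
        self.tripped = False

    def record_outcome(self, prediction, actual):
        """Feed ground truth back in as it arrives (the feedback loop)."""
        self.outcomes.append(prediction == actual)
        if len(self.outcomes) == self.outcomes.maxlen and self.accuracy() < self.threshold:
            self.tripped = True  # flag for incident response and retraining

    def accuracy(self):
        return sum(self.outcomes) / max(len(self.outcomes), 1)

    def predict(self, x):
        # Once tripped, route traffic to the safer fallback until reset.
        model = self.fallback if self.tripped else self.primary
        return model(x)
```

In production the trip would also raise an incident and trigger retraining; the threshold and window here are arbitrary placeholders.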

Miah Hammond-Errey (13:34.122)
I can't help but think about algorithms going wild. Going rogue. I want to pivot. Yeah.

Aurelie Jacquet (13:40.969)
But on that, actually, I'll pick you up on that. You know, that's something we saw in markets, right? What happened with algorithms is you had different algorithms that were playing each other, but in markets it's traceable.

So you can see it. Now, when we have agents, it's the same principle. It's not different from what we saw in capital markets; it's just that you have agents that will push each other. Like in the markets: if you decide to have an agent that buys an amount of X at this price, the other agent, which you do not see, will push the price, and suddenly the price of your favorite tennis shoes will increase.

So what happened in markets is what's happening with AI and with AI agents. That's predictable. The challenge is that none of this is visible in real life.
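The agents-pushing-each-other dynamic can be shown with a toy simulation. This is purely illustrative, not a model of any real market or product: two pricing agents each set their price a small markup above the rival's last observed price, and with no cap or human check the feedback loop ratchets the price up every round.

```python
def simulate_pricing_agents(start_price=100.0, markup=1.05, rounds=10):
    """Toy model of two pricing agents that each react to the other's
    last observed price by applying a small markup. Unchecked, the
    feedback loop pushes the price upward every round."""
    price_a, price_b = start_price, start_price
    history = [(price_a, price_b)]
    for _ in range(rounds):
        price_a = price_b * markup  # agent A reacts to B's last price
        price_b = price_a * markup  # agent B reacts to A's new price
        history.append((round(price_a, 2), round(price_b, 2)))
    return history
```

Running the loop for ten rounds turns a $100 starting price into one several times higher, which is why the invisible interaction of agents, unlike traceable market trades, is the concern raised here.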

Miah Hammond-Errey (14:41.2)
Yeah, especially when you're talking about things like dynamic pricing, or situations where you as a consumer can't see the processes happening.

Aurelie Jacquet (14:52.717)
So in capital markets, it's all digitised. When they were doing trades manually, it was harder to do; now it's all digitised, so you can see who's pushing what. We have some clear rules about what you should and should not do. And if you cannot justify why your algorithm did this trade in the markets, let's say you're going to have a few questions from the regulators. So that's markets.

Now, let's move to the AI world. There are some strong similarities here, and that's why what you see happening globally is a focus on transparency and audit. Why do we ask for registers? Because we need to know what the AI systems are, what they're doing, and how they're interacting with each other. I don't like predictions, but if you think about history repeating,

it says that in order to manage all the bots and make sure there's no illegal activity, you would have a way to record the interactions and actually justify them. And that's where the registers and those pushes for transparency and regulation are coming through left, right and centre.

Miah Hammond-Errey (16:13.95)
So do you think algorithmic transparency is important in regulation, or is there a lot of nuance? Because some aspects of that are going to be IP protected, and some are going to be shared with some people, like regulators. But then in things like social media, you can also think, well, maybe people want a certain level of algorithmic transparency over how their feed might be curated or how they might be shown things. So different situations might require different elements of transparency.

Aurelie Jacquet (16:41.613)
100%. We had this in the past. No algo trader will ever give you the underlying model they built. One of the exchanges tried that way back and it was never successful; there's good reason for that. So you're never going to be able to do that here. However, when it comes to transparency,

there are two types of transparency. You see we have data cards and model cards; these are useful registers of the capabilities, of what they can and cannot do. And when I say the users, you have what we call big tech, but we also have the users that use AI for specific applications. They will have to provide a level of transparency about what they do

with those bots, in order to understand any change in the market. So that's one side of transparency: for the regulator to understand what's happening and to be able to pick up on activities that breach the law. The other piece is that transparency is very specific. I completely agree with you.
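The model cards mentioned here are, at their simplest, structured records of what a system can and cannot do. A minimal sketch follows; the field names and figures are illustrative inventions, not taken from any published card format:

```python
# A minimal, illustrative model card as plain data. Real cards
# carry far more detail (evaluation conditions, fairness analyses, etc.).
model_card = {
    "name": "fraud-screening-v3",            # hypothetical system
    "intended_use": "flag card transactions for human review",
    "out_of_scope": ["automated account closure", "credit decisions"],
    "training_data": "internal transactions, 2019-2023, AU only",
    "known_limitations": ["degrades on merchant categories unseen in training"],
    "performance": {"precision": 0.93, "recall": 0.81},  # illustrative figures
    "last_evaluated": "2025-11-01",
}

def fit_for_use(card, proposed_use):
    """Cheap register check: reject a proposed use the card marks out of scope."""
    return proposed_use not in card["out_of_scope"]
```

Even this toy check shows the value of a register: a proposed use listed as out of scope can be flagged mechanically before deployment.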

I will never advocate a one-size-fits-all. I've done lots of work on transparency, and again, it depends on the use case. If you're too transparent with some users in some circumstances, you can create a lot more chaos than you would otherwise. And we saw that already in privacy. I used to be part of the privacy function of a large organisation. They use design thinking now

for interventions and for telling people effectively how an incident happened. Again, in privacy, we had a lot on transparency. I still remember, again, a lesson learned from the past: the ICO at one point did guidance on transparency and explainability about how to use data, before AI.

Aurelie Jacquet (19:00.75)
You had four or five different ways to do it for different types of people. It was, practically, as an organisation, very hard to implement, right? So what we've learned from those spaces is that when you have to inform people about AI and how it

works, we've learned, at least from privacy notices, that we should definitely not do a privacy notice with a long text. That, I can say, in my mind, helps neither the customer nor the company. What we do is articulate the benefits and also articulate the harms that can happen. So that's the first step.

And you start to see some... I'll start again, sorry.

So when you do transparency, you need to think about the benefits and the harms to the people. It's always back to those questions: what's the best way to communicate? What's the best intervention? Et cetera. And that's a very complex question that depends very much on the use case within which you're building the system.

Miah Hammond-Errey (20:27.284)
Thank you. I want to pivot and ask a couple of questions about boards and kind of oversight. How technically literate do boards need to be now?

Miah Hammond-Errey (20:42.866)
Another way of putting it might be, what does meaningful AI oversight require from directors?

Aurelie Jacquet (20:53.58)
I think, I'm biased. I think that-

For lawyers, and I used to be a litigator, we used to get into any matter very quickly and become experts at whatever had been thrown at us, whether it was a case about learning all the aviation rules, or, I remember, one case about the Burj Al Arab and construction, how you actually had to construct the tower.

All of these are vastly different. Boards really need, right now, to be able to have that litigation mind and ask the right questions. That's absolutely key. We've done a lot of seminars with senior officials in government, and I've talked to a few boards many times.

I think the first thing is a reassurance message: no one knows a business better than they do. They know what use cases are working and what challenges they have. That's one. So that's something that should reassure them, because with AI you put a tool in there, right? However, this tool is going everywhere right now, and that's the challenge,

because it's proliferating everywhere, everyone's trying it, there's a big push for AI. For directors and boards, it's: how do I keep on top of this? How do I make sure that the uses are appropriate for my company's strategy? And I guess it's back to...

Aurelie Jacquet (22:53.658)
One, you need to understand the basics of the technology. It's probably good to have a specific person on the board who helps with the responsible AI piece, or the AI piece altogether, because they can keep you abreast of all those changes faster.

Aurelie Jacquet (23:20.138)
And as I say, this is not a new thing. There's now enough evidence from people on the ground who have been in there on AI and responsible AI. They know it's not brand new anymore, and that there are pieces you absolutely need to have in place. So, long story short, you do need a certain level of technical expertise.

But you need to think like a good director, the way you usually do, and be able to ask the right questions. Do not be scared of the technology. The hardest thing in the AI world is the hype, the nudges, being fed different pieces. The ecosystem is very fragmented, very noisy. So...

It's being able to find the right advisor, or the right person on the board, who knows who to trust or which sources to trust first. And then keeping up that good knowledge. Understanding that, you know, when we had computers coming in, right?

Miah Hammond-Errey (24:25.738)
Yes.

Aurelie Jacquet (24:38.54)
You still need to understand how a laptop works; you just can't ignore it. It's the same for agentic AI: it'll go and feed, and it'll automate some tasks. So you'll need to understand how this works and ask the right questions.

Miah Hammond-Errey (24:54.858)
Do you think boards are underestimating the systemic risk dimensions because AI is being framed primarily as an opportunity?

Aurelie Jacquet (25:07.424)
I'd like to think that 2026 is when governance becomes pretty much the new black. Because, as you say, there's been so much push for AI: you have to do AI adoption, scale, scale, scale. We see articles every so often, I think one of the consulting firms said it, that if you don't use AI well, you will be out.

I think the real trouble right now is

You need to think about using AI, but if your governance function and your compliance function don't understand how to risk-manage it, if you don't have the controls in place at that level, well, then good luck scaling it. Imagine pretty much any financial services company or health company doing their usual service or product, but

not having upskilled any of the second line for any of it. That'd be pretty scary, and good luck, again,

Miah Hammond-Errey (26:28.341)
So I want to go to a segment. What are some of the interdependencies and vulnerabilities between AI and security or society that you wish were better understood?

Aurelie Jacquet (26:29.218)
with scaling?

Aurelie Jacquet (26:44.154)
yes, so... My...

My pet peeve about AI is that effectively we're always trying to reinvent the wheel. As I say, we started with ethical AI, then responsible AI, trustworthy AI, safe AI. Each time, we try to reinvent the wheel when it's not changing.

I was looking at IP incidents, bias incidents, reliability incidents back in 2018. Those were already the challenges. And I think my...

Miah Hammond-Errey (27:31.336)
I was going to ask you about all those terms: trustworthy AI, safe AI, responsible AI, ethical AI. I mean, they keep proliferating. Do you have a prediction for this year's?

Aurelie Jacquet (27:45.038)
I think

Miah Hammond-Errey (27:46.643)
Not a serious question. But I mean, I do want to know: is it just a reinvention of the wheel? Are we trying to incorporate new elements? For people who are not in it every day, why are there all these different terms? Because there's some politics to it.

Aurelie Jacquet (28:03.49)
Definitely. So you have two questions; I'll go with this one and then I'll go back to the other one. Those terms change as we see the evolution of the thinking and the learning. Ethics was like aspiration: we had goals, we had principles, right? Then, with responsible AI, we just started to see some incidents. So it was a good way to say to organisations:

no one wants to regulate the tech, but you need to be responsible. So start thinking about the incidents and the other pieces, with some general guidance. And now we're going into safety, because as we embed these systems more and more into our day-to-day lives, safety becomes paramount.

But they're all very interconnected. In our world, on the ground, when you implement, they're very interconnected. You need to have safety to effectively act responsibly, and, where you can, you try to act ethically. So for example, for bias, right, you have

responsibilities: you need to be safe, to make sure it doesn't hurt or harm anyone when your data is incomplete. No one has a complete data set, so your data will never work perfectly. So what are the trade-offs that you can make safely? And to the extent you can, you enhance the data to make sure that it not only complies with the law, but

satisfies and helps and provides, let's say, a result as equal as possible for all. So that's where you put safety, responsibility and ethics together. And because you've put the right controls into the AI system, you make the system trustworthy. That's how it all comes together. My favorite representation of that...

Aurelie Jacquet (30:26.255)
Well, there were lots of pieces done on trust. Trust is impossible to manage; everyone has a different way of trusting things. There's an excellent NIST report, which got pulled down but you can still access, about trustworthy AI and trust in AI, pretty much explaining that what we can do is take the right steps to enable trust, but

we will never be able to get anyone to guarantee trust. The UK has also done lots of good work on that, explaining that what we're after is justified trust. And that's why I am working in the field of, you know, conformity, sorry, certification is a better word that everyone understands. So you certify that AI systems

have the right controls in place, and that's where you know for sure, and people can make their minds up about whether they want to use it or not.

Your other question was, sorry, you'll have to remind me... ah yes, the security piece. What I wanted to say is we see lots of horizontal regulation, and security is a perfect example of it. I like some aspects of the EU AI Act, but the fact that it comes as horizontal regulation makes it very difficult to implement when there is already

legislation in place, and we see this friction happening right now. Security is a perfect example: at the very beginning, when AI was just coming into the market, security professionals were only looking at protecting the data. Now you need to protect the model. So it's always an

Aurelie Jacquet (32:35.318)
uplift: what we are doing with AI is effectively uplifting existing practices and improving them so that they are fit for AI. Security is a great example to show that. What we've also done, and learned from security, is that there's a standard I'm going to call out: it's called ISO 27001.

Only the standards people know it by name, but pretty much every company or organisation, when you do procurement, when you want to effectively sell to that company or organisation, as part of their procurement they'll ask: are you ISO 27001 compliant? So it sets the standard for what

security processes you need to have in place, and that's recognised international best practice. And guess what? What I'd love people to know, what I'd really, really like them to realise, is we've done that for AI. We now have a standard, ISO 42001, that we built and shaped so that it can effectively be part of the procurement process and help set those best practices,

and the beauty is it actually works together with cybersecurity, because these have to be building blocks. We can't just put AI practices on top in their own silo and ignore everything else. They really need to build on top of privacy practices, security practices, risk management practices and compliance.

Miah Hammond-Errey (34:31.316)
Really, in five years, what is the AI conversation we'll regret not having had today?

Aurelie Jacquet (34:38.447)
Let me think about this one. What's the AI conversation we will regret not having had today?

Aurelie Jacquet (34:55.95)
For me, for Australia, it's that we need to be part of the international ecosystem. That's always been my push. That's why I'm part of the OECD. That's why I'm part of ISO. So the conversation is that we should not have a view only about Australian regulation and

best practice and risk management. We need to look broader than that, because this is a supply chain. Despite the friction happening all around the world right now, we have globalised data. We saw that with privacy. We see it with cybersecurity. We saw it with COVID. Australia is no longer an island at the bottom of the map.

Technology has brought us all much closer together. And what that means is that our regulation, our compliance and our best practice are intertwined. That's why we need to be part of those international conversations, and we need to have those international conversations in Australia.

And we cannot be a taker; we need to shape, otherwise it's going to be very, very challenging. And I have to give it to the eSafety Commissioner: she is doing an amazing job driving those conversations internationally.

Miah Hammond-Errey (36:27.966)
The pod is a fan of Julie, she's been on the show, we love her too. So let's go to the contest spectrum: what is a cooperation, competition or conflict you see coming this year?

Aurelie Jacquet (36:40.306)
It's all three. So, I talked about AI safety. You have what's called frontier models, which are at the top. The frontier model piece is going to come forward a lot more this year; we already see the EU looking at frontier models early this year. Frontier models are models that are extremely capable.

Some regulations have put a number to it, but the challenge with that is that technology keeps evolving. So you can put a number on it, and the technology will outgrow it. So for those models, we're starting to think about safety and what it looks like. And I'm not talking about AGI; I am personally not a believer in AGI.

For those models, there's lots of research, done through the Safety Institute, on how you manage those risks, and that's really great work. But then, how do you reconcile it with the work that's done on the ground, day to day, by the people who do risk management?

And I think at the moment what we have here is, as you say, a little bit of a conflict, because safety, from the research side, is different from risk management. So you see the conflict between the two. I'd like to think there's going to be cooperation. Well, there's a bit of... you said conflict, cooperation and collaboration. Was that the three? Yes.

Miah Hammond-Errey (38:21.906)
Yeah, I mean it could be any, but you've chosen all three.

Aurelie Jacquet (38:24.461)
Yes, so I think those two will have to come a bit closer together. I'll give you an example: CSAM, child sexual abuse material. One issue for frontier models is that they're looking at limiting the exploitation of

children's content. So that's a really important piece to do, where there's lots of research happening. And how do you bring that to risk management on the day to day, so that when those models come in, people in risk management can actually do the work and understand the research? So it's really important that those two come together. We do see that in some of the international forums; it's starting a little bit.

And hopefully that's where we'll see ongoing progress on those items.

Miah Hammond-Errey (39:30.836)
I don't know which direction to go. I've got so many questions. All right, I'm going to go to a new segment. It's called Grounded. What should we stay focused on, connected to and grounded in this year to keep sane amid the chaos?

Aurelie Jacquet (39:43.111)
That's not from me, but I like this; I would have to look for the person that made that quote. Sorry if I use yours, but it was so good. It's going to be... Everyone's competing for your attention: everything, every algorithm. So for me, it's two things, and I haven't mastered them. Blocking, effectively.

Selectively blocking what's not of interest, being able to set aside what's of no interest, so you can manage your workload. There are so many headlines, and everything is geared to nudge you or grab your attention. Your attention is the reward, so you need to protect it, because that's what keeps you sane.

Aurelie Jacquet (40:44.729)
And sorry, what was the second one? Can you repeat the question?

Miah Hammond-Errey (40:48.779)
It was just: what should we stay focused on, connected to or grounded in this year to keep sane?

Miah Hammond-Errey (40:57.802)
That's right. That was a great one. It's a great answer. So I want to go to some of the government and policy questions. What do you see as the best mechanisms that governments can use to inform themselves on tech and AI policy while being able to harness genuine expertise, whether that's technical or otherwise?

Aurelie Jacquet (41:19.407)
I think, look, I've done work for the European Commission way back. They gathered a group of experts to understand how different policies were shaping around the world on AI. That was a while back; it was called the EU AI Outreach.

It was me and my colleagues; we all brought different reports and different aspects to the Commission. So I think that was a valuable exercise to help the Commission understand, to my point, how the world is shaping on AI, not just within one country.

And you see those patterns repeating: actually bringing in experts who know deeply how to manage AI and have worked in it for a long time. I think it's been helpful, and even for the experts it's been helpful working together, because often we bounced off each other. Rather than getting one piece of advice from here and another from there, you have us all in the room.

And being able to, as a lawyer would say, advocate in a friendly manner, and to understand where everyone is coming from, just moves us to the conclusion a lot faster. This is something that worked well at the federal level, when we had the 12 experts advising the federal government.

And it's working well with the New South Wales AI Assessment Committee that I'm part of too. What makes it work well is that we bring people from government who are in the day to day, from academia, and from industry. It's always that triumvirate that works well.

Aurelie Jacquet (43:33.342)
Once we have all the voices at the table, you can actually make implementable solutions and understand the problem better.

Miah Hammond-Errey (43:42.886)
Where are some of Australia's regulatory incentives misaligned with economic security and social outcomes?

Aurelie Jacquet (43:54.958)
Can you build that out a bit more for me?

Miah Hammond-Errey (43:56.501)
So yeah, what I'm really asking here is: are there things right now where our regulatory frameworks are just not aligned super well with the economic security and social outcomes we're aiming for?

Are there ways we can optimize that, I guess?

Aurelie Jacquet (44:18.712)
So my learning is that there's always a way to optimize; every region is a bit different. What I see works best, and that's what I've advocated in many forums, is that old-fashioned regulation does not work. We've seen this also in the cybersecurity world. If you have a prescriptive law,

it's really hard to adapt to technologies that evolve. I know not everyone likes principles; with privacy, principles were very difficult to adopt because you didn't know where to go. So the upshot is, when we're writing laws today for technologies that evolve, maybe I'll give the good point to

Europe and say that they have a framework that's built on principles, but then they follow with guidance. It's not always easy, but at least you have those standards that support it and provide guidance, and they've been doing that for a long time. It's not perfect, but it allows for evolution, because changing the text of a law is really challenging.

In the financial services world, obviously, when we had massive change, you saw the adaptation: the regulators gave direction and updated that direction as the technology matured. Again, not perfect, but it allowed for variation. The key challenge with AI is that

any of those systems affects every portfolio, every regulator. Again, back in the past, when we did algorithmic trading, the hardest piece was educating all the regulators very fast about what the algorithmic trading was doing. There was a report back in the UK that explained that was one of the key challenges.

Aurelie Jacquet (46:38.765)
I think with AI, and that's why the experts are useful, the question is how you can help the regulators understand the challenges and the differences with AI. It's a big education process, so that they can actually make the best choices about changing the law. On specific aspects, I'll just also mention that

Lyria is always great when it comes to discrimination law. She's written some great papers and suggestions on what to do there, so I will absolutely defer to her on that aspect.

Miah Hammond-Errey (47:12.626)
Awesome. We love Lyria as well; just a shout-out to pod friends here. What are some of the most underappreciated strategic risks of AI for government right now in 2026?

Aurelie Jacquet (47:29.486)
Nudging and oversight.

When you use technology, there's always an element of trust. It's always been like that; it's not new with AI. Like I said, I'm definitely not trying to reinvent the wheel. But with AI, I think there are many cases where basic algorithms were used. Take Eliza, for example, a very old chatbot that was providing

advice from a psychology perspective, and people were trusting it extremely. So you have that: an extreme trust in technology. I did say AI is just a tool, but maybe the one time I was wrong is when I said it's not going to change our society so much. NLP, so when we use words, we use words to

convince people; that's the way we communicate. So it's very powerful, much more powerful than we think. Having the right words and being able to place them can convince someone and make them the happiest person or the most miserable person. So again, with this power of

Aurelie Jacquet (49:10.338)
making everyone capable of choosing very powerful words and nudging people, pushing them one way or another, without really understanding or having visibility on how the nudges are made.

Miah Hammond-Errey (49:26.431)
I'm really interested in this. I've written a lot, from a security lens, on what I've called an information ecosystem of malign influence and interference. So foreign interference, but also this exact thing, whether it's algorithmic control or nudging towards behavior. It is deeply concerning, and yet it feels quite intangible to try and describe. You know what I mean? You can talk about it, and nudging, and...

So I'm really interested in hearing you say this, because I think it's challenging from a democratic point. I think it's challenging from an electoral point. But I also think it's challenging in terms of polarizing, inciting people to violence, foreign interference. This ecosystem is actually about influence. And, to your point about attention, it's also about intention. We used to have this focus on the attention economy.

And now there's recent research, I think it was out of Cambridge, talking about trying to monitor and nudge your intent. And this is where this influence, or in your words, nudging, comes in. It's a really interesting space, and a risky one.

Aurelie Jacquet (50:35.003)
And again, we were talking about this in 2018. The first ones that came to me and asked about this back then were startups, because when they were building projects or products with AI, they were just asking: can I actually make this recommendation? Will it be safe? So from, you know, a recommendation

to eat something or not, as part of a health app; there are many impacts you can have on a person with that. I've also seen lots of cases, especially in not-for-profits, where it comes in different ways: because you're a person who likes to donate at a higher level, we've tracked that, and we present you with that screen of a higher donation,

nudging you towards a higher donation, but no one's explaining that to you; that's what's happening in the background. Equally, we saw for loans, for lending, that some people really don't like to see five screens. They just want to see one. Others want to see ten. I'm probably on the ten.

Miah Hammond-Errey (51:53.95)
I don't think we're the best customers. We're not the most nudgeable.

Aurelie Jacquet (51:58.585)
But so we're all being nudged, more and more. Before, you had to fill in a form; you had friction. The biggest friction I had was through... I know, I won't go there.

Miah Hammond-Errey (52:10.986)
So this is one thing I think is super important. I've got a couple of big presentations coming up about AI and cybersecurity, and I think one of the key challenges for organisations is that AI is just reducing that friction. It's reducing the friction for the individual to make an error, particularly in terms of cybersecurity: phishing, deepfakes and so on. But it's reducing friction across the board, in our whole lives.

Aurelie Jacquet (52:35.918)
And back to your point, it makes it very, very hard to make a decision and to challenge the algorithm. Way back when I was getting into AI, I always had this in mind. So as a litigator, you...

you have to produce advice and say, look, you have an 80% chance of winning. And some of the pro bono legal centres do that. But imagine if everyone's got an AI system that tells them, well, for this case, you've got a 1% chance. Everyone sees that you have a 1% chance of winning. What's the incentive to take that case?

So in law, obviously, you have an equity problem; that's a challenge. But you see, how can you actually challenge the AI decision? And that's where the idea of

oversight is a bit fraught. At the beginning I said you need human oversight, but it's not possible to have human oversight always, like in nuclear or aviation. You need to be careful where and how you put human oversight. But if it's nudged...

Miah Hammond-Errey (54:02.548)
So you just kind of brought it back; I was going to do that. You know, that question was: what are the most underappreciated strategic risks of AI for government? We talked about nudging, and now you're bringing oversight back in. Did you want to wrap that up? Because I took you on a tangent.

Aurelie Jacquet (54:16.43)
Yes. So nudging plus oversight. How do you get good oversight and accountability from people when they're being nudged every day and when they don't know what they can trust and what they can't? That's a big deal.

Miah Hammond-Errey (54:29.95)
And what can government do? How can they start to think about that problem, even acknowledging that we're being nudged in almost every digital aspect of our lives?

Aurelie Jacquet (54:39.661)
I think there are some good efforts underway. We were nudged in different ways before, but with AI it's always a scale problem. There are good ways you can address it: understanding what you use it for, and getting people time to think. That's probably about the workforce, and that's a big issue, and a separate issue:

thinking about how you allow your workforce to detach itself, have some thinking time, and challenge the observations they see. That's a balance that needs to be struck. There's no perfect answer yet, so I can't give you the one silver bullet. But what I see from

Miah Hammond-Errey (55:33.759)
Yeah.

Aurelie Jacquet (55:38.785)
all the errors, like the lawyers using the tools and AI mistakes proliferating in judgments (we see this in many of the professional services cases), is that people need time to think and to be able to actually challenge the system. And they'll need to have some intervention that's built

properly, so that we can work together. So again, internationally, we're doing some work on human-computer interaction to understand how you can assess the reliability of the response, but also of the human and the interaction. That's at the beginning, so again, I won't have the perfect answer for you.

Miah Hammond-Errey (56:33.438)
I want to go to a segment on alliances. You're obviously involved in many of the multilateral institutions and forums that set standards or discuss these issues. What are some of the key alliances in AI and governance that we need to be focused on right now?

Miah Hammond-Errey (56:51.972)
Or would you prefer just to give us an overview of what's happening?

Aurelie Jacquet (56:56.226)
Let me see... I was trying to think... yes, I've got something about alliances. One interesting one that's coming, and that I'm very happy about, is insurance and responsible AI.

Miah Hammond-Errey (57:22.122)
Tell us more.

Aurelie Jacquet (57:22.223)
So far, insurers have not provided specific insurance for AI; it's silent cover at the moment. There's some noise that this may change. If it's changing because AI is bringing its own set of challenges and the insurers are looking

at how to manage the risk, then you'll see an uptake. You already see some insurers that are looking at which frameworks are helpful in managing those risks. So the alliance is effectively this:

if this alliance comes through, it looks like your responsible AI best practice could be rewarded by having the right insurance in place, by getting the insurance if you have those practices in place.

Miah Hammond-Errey (58:41.407)
Really interesting. I've got a couple of juicy geopolitical questions before we go through the last couple of segments.

You know, geopolitics is obviously particularly relevant to AI, given that the infrastructure is largely centralised in two major jurisdictions. And we operate in an environment of fairly persistent state-linked cyber activity and AI threats, not necessarily from the same jurisdictions. But not every AI incident is geopolitical, and not every vulnerability is strategic competition. How should governments...

and organisations interpret tech infrastructure and state-based threats without over-securitizing everything?

Aurelie Jacquet (59:41.609)
The one piece I've got in mind is understanding your strengths and what you want to protect, right? For me, it's the data assets, and I guess this is what's happening with agents. How do you protect your data assets as a company, but also as a country?

In one of the organisations we were working with, we were starting to look at data exchange and at having a record of Australian datasets. That's where our value is and what we need to protect. I can't speak for the whole stack, but if I have to think about

what we would need to protect, my perspective comes from the privacy world: it's our quality Australian data. And we see that happening in Europe, where they are looking at creating European data assets or data exchanges.

That allows you, as a country, to use the data but also to manage access to that data. I think that's where the value lies for us, because just like any company, at the government level, if you

give your data away and anyone can use it, then you're losing a little bit, or a lot. You could be losing quite a lot.

Miah Hammond-Errey (01:01:42.182)
As we see across that tech stack, and particularly the infrastructure of AI and frontier models,

there's a real risk that we could be walking into a more fragmented world. So how does Australia... because you started this conversation by saying we have to acknowledge that we're part of a global system. And at the same time, that global system is fragmenting. How do we continue to engage with it in a way that, as you say, protects our data, helps us engage in that economy, and yet at the same time acknowledges that

Aurelie Jacquet (01:02:02.947)
Yes.

Miah Hammond-Errey (01:02:17.084)
some of those shifts have the capacity to impact us more significantly and strategically later on.

Aurelie Jacquet (01:02:23.475)
You build bridges; that's what I'm all about. I've never seen the world adopt one regulation, ever. As a litigator, you always choose: I was actually doing international arbitration, so you pick the best forum and you pick the regulation you want for the case. So here,

everyone has a different perspective, and what you want to do is build a bridge between those different perspectives. I'll give you an example. In Australia, we were the first to adopt... again, I'll talk to the standards, because that's the easy part, and they promote interoperability. So we adopted it in Australia,

and we had a chat with Singapore, and they were really interested in adopting it as a local standard. So in February last year, we had a session between Singapore and Australia where they actually launched the local adoption of the ISO standard. They have different regulation than us,

but effectively both countries have ISO/IEC 42001 in common, so they agree on the key controls that need to be in place. That's how we build bridges, rather than creating pure friction and making it impossible to implement the stack. Yet, I...

Miah Hammond-Errey (01:04:09.567)
I will say that's perhaps the problem of my domain, national security. We create a lot of friction.

Aurelie Jacquet (01:04:17.398)
Yes, and to that point of adding something on top: with AI, you need to consider security. That's back to the existing law. You've got quite a heavy stack for some organisations to already play in. On cybersecurity alone, you've got many laws that apply across the world. So imagine, for AI,

cybersecurity is primary, then you have data, and then you have the AI regulation. That's three big blocks to manage. And that's why, from my perspective, it's absolutely key that we start building those bridges, at least for AI, because I've been on the receiving end of looking at the different privacy laws and how they interact, and I'm not even going to the state level.

And that makes it very hard to use the technology.

Miah Hammond-Errey (01:05:20.883)
Absolutely. All right, I know we're heading towards the end, so I'm just going to go through my last couple of segments. I want to go to a segment called Emerging Tech for Emerging Leaders. What do you see as the biggest shifts for leaders and leadership from emerging technologies like AI?

Aurelie Jacquet (01:05:40.983)
So, for leaders, I think the biggest shift is to understand that AI is breaking all the silos. While AI plays into an existing process, you'll have to be a lot faster. You need time to think, but you can't keep the processes the way they are.

If a model is changing its output within X amount of time, you can't have the rest of the enterprise not moving at the same speed. So that's one.

Aurelie Jacquet (01:06:30.606)
I'll go back to accountability and effectively managing the system. As there are more AI systems, there'll be more interdependencies. So it's really important to understand and have a map of those models and those interdependencies.

So again, it changes the way we work. We'll have to have policies that apply to people, but that apply to people plus systems. So, okay, sorry, start again. I'll just... I'll have to think about it.

Miah Hammond-Errey (01:07:11.742)
People plus systems. Okay, so we had speed and accountability. Don't forget those, because they're really good, if you're going to start again.

Aurelie Jacquet (01:07:18.452)
Okay, I had something... speed...

Miah Hammond-Errey (01:07:41.45)
Do you want the question again?

Aurelie Jacquet (01:07:41.914)
No, I'm just trying to see how I weave it in, because there's something for the board, you know, the minutes from board meetings, there's something that I'm... yeah. Right. Okay. Yes. So, speed and accountability. Obviously, for accountability, you need

Miah Hammond-Errey (01:07:55.144)
Yes, weave that in.

Aurelie Jacquet (01:08:09.966)
to understand the dependencies of your AI system, and what the AI system depends on in other systems and in people. That's particularly important. And effectively the way you make decisions, that's hard; again, back to what I was saying, taking that time to be able not to be nudged and to make the decision.

And for board directors, the one thing to keep in mind that's very topical at the moment (it's more of a very practical tip) is that there are a couple of court cases happening in the US that pretty much say the data you have in those AI systems is not privileged.

So it means that if you talk at the board level with your legal advisor about a court case, those conversations are privileged. But if you feed them in some shape or form into an AI system, they're not, or they're likely not to be, if we were to follow the US decisions.

So for boards, if they're thinking about AI note-taking, they should be very careful about which data is embedded in the AI system. And it's a very important point for boards to think about their data assets, their data quality and any data leakage.

Miah Hammond-Errey (01:09:40.34)
So even something like an audio transcript?

Aurelie Jacquet (01:10:08.344)
That can happen with AI. So the new business model with AI is again attention, but it's also data. You have your own data; you want to keep that data. It's protected and confidential, and sometimes privileged, so protect it. That's going to be more important, especially with agents, as we...

are fully into the data economy.

Miah Hammond-Errey (01:10:39.998)
That is an incredibly important risk for people everywhere to be aware of, given the prevalence of, you know, audio-to-text transcripts, for example. I want to go to a segment called Let's Disconnect. How do you wind down and un...

Aurelie Jacquet (01:10:53.326)
It's been a little bit trickier lately to use that technique, but I do ocean swims, so I did the Cole Classic last Sunday. While there was a shark alert, I still... well, they didn't tell us, so we could keep going, and it was actually a very enjoyable swim.

It's a good way for me to disconnect and rearrange my thoughts. Actually, I get the best ideas while I'm swimming.

Miah Hammond-Errey (01:11:31.082)
Coming up is Eyes and Ears. What have you been reading, listening to or watching lately that might be of interest to my audience?

Aurelie Jacquet (01:11:39.15)
So, I was in Korea late last year, on 5 December; we had an AI standards summit where we had quite a lot of good announcements. And as I was not adapting to the jet lag, I was watching the Nvidia announcement about what we call

physical AI. Again, I'm not sure I like the term physical AI, but what it is, and that's exciting, is having AI systems helping with... the word escapes me, helping with...

Aurelie Jacquet (01:12:36.546)
I'm having a blank, sorry. What is it? A digital twin. So what's really exciting... they refer to it as physical AI, and I'm not super happy with the term, because it doesn't explain it that well. What it is, is using AI to build a perfect digital twin.

Miah Hammond-Errey (01:12:38.097)
Explain it to me.

Aurelie Jacquet (01:13:04.473)
So effectively helping with manufacturing. Once you understand all your processes and can map them well, then AI can help you with that. But what I see next is: imagine if you can map your processes, your risk management and other processes. Then you're in that exchange we were

discussing, able to create and see the interactions. You can train and test the interactions in a safe digital environment and see how it reacts. Those digital twins that are powered by AI can also help AI understand what the limits are and how to manage it better. So that's what I find super exciting.

Miah Hammond-Errey (01:13:58.888)
Yeah, cool. Okay, my final segment is Need to Know. Is there anything that I didn't ask that would have been great to cover today?

Aurelie Jacquet (01:14:14.552)
I'm just getting super hot, sorry. I'll have to close the window; give me two seconds.

Miah Hammond-Errey (01:14:20.444)
Okay, I know, I know what you mean.

Aurelie Jacquet (01:14:36.27)
Sorry. Something you didn't ask. Yes. When it comes to responsible AI, it's great to have an AI policy. It's the first step, but it's not the last step. If you have an AI policy in place but you do not have a set of best practices implemented at the control level, you should not be scaling AI systems.

Miah Hammond-Errey (01:14:37.667)
good.

Aurelie Jacquet (01:15:04.746)
So make sure you move from policy to best-practice implementation across the AI system lifecycle. Otherwise, you're scaling your risk and you'll have difficulty moving to the next step of your AI deployment.

Miah Hammond-Errey (01:15:28.382)
Aurelie, thank you so much for joining me today. I feel like we could have kept talking for hours. It's been a real pleasure.

Aurelie Jacquet (01:15:33.698)
Yeah, always a pleasure. Thank you very much. I really enjoyed the questions.

Miah Hammond-Errey (01:15:39.093)
I have like a million more questions, but we both have...