Technology and Security (TS) explores the intersections of emerging technologies and security. It is hosted by Dr Miah Hammond-Errey. Each month, experts in technology and security join Miah to discuss pressing issues, policy debates, international developments, and share leadership and career advice. https://miahhe.com/about-ts | https://stratfutures.com
Technology & Security Season 3 Episode 7
Dr Miah Hammond-Errey (00:00) Welcome to Technology and Security. TS is a podcast exploring the intersections of emerging technologies and national security. I'm your host, Dr Miah Hammond-Errey.
My guest today is Professor Edward Santow. Ed is an industry professor for responsible technology and co-founding director at the Human Technology Institute at the University of Technology Sydney. He serves on numerous federal and state advisory boards, including the Australian government's AI expert group. He was the Australian Human Rights Commissioner for five years, from 2016 to 2021. He is also well known for his career in legal research, public advocacy and private practice. He has received several leadership awards and is a fellow of the Australian Academy of Law. It's a real pleasure to have you on the pod.
Prof Edward Santow (00:33)
So great to be here.
Dr Miah Hammond-Errey (00:34)
We're coming to you today from the lands of the Gadigal people. We pay our respects to elders past, present and emerging and acknowledge their continuing connection to land and community. So I want to ask you first about Australia's AI posture. What's occupying your brain space about AI and Australia's AI position?
Prof Edward Santow (00:53)
Well, you know, since, I guess, the third week of January 2025, globally, there's been a kind of new politics on AI being advanced, most obviously by the Trump administration. And at the heart of that new politics on AI has been this critique: that we focus too much on the risks and threats associated with AI and not enough on the economic and broader opportunities of AI. Regardless of whether you accept that critique or not, it's helpful to be really clear-eyed about what it means. It means that people are looking for, and governments are looking for, ways to lean into the AI adoption opportunity space. And I think there's a new scepticism about regulation and anything that might be considered a brake on AI adoption and AI development. What that means for us here in Australia is that we need to find our own space in that. We need to re-articulate what is so important about a balanced approach to AI.
Dr Miah Hammond-Errey (01:57)
That is really interesting, because a few months ago, there was a global study released, as you would well know, which ranked Australia last out of 47 countries in believing that AI benefits outweigh the risks. And on the other hand, we're also referred to as early adopters. What do you think is happening here in the way that Australians see AI and what that might mean in the global context?
Prof Edward Santow (02:17)
Yeah, so by developed-country standards, Australia is right at the very bottom of the list when it comes to trust in AI. And there are, in fact, lots of different research methodologies that all come to the same conclusion. I think what it says is that Australians have taken a bit of a dim view of where AI has gone wrong.
And I think Australians also feel that perhaps we don't quite have effective guardrails in place. Your second point is true though, Miah, and that is that Australians tend to lean into innovation; we are often early adopters. It might seem almost impossible to reconcile those two positions, but to my mind they can be reconciled, because I think Australians are just practical. They kind of go, well, yes, we see the opportunities, but we also see this problem, which is that along with those opportunities, the risks haven't properly been addressed.
Dr Miah Hammond-Errey (03:17)
Do you think we're making enough headway in addressing those risks and concerns?
Prof Edward Santow (03:22)
Yeah, it's such a good question. So I think we're doing two things simultaneously to address this problem of community trust, or mistrust, in AI. One of those things is terrible and useless, and the other one is quite good. I think the thing that really doesn't work is where people in positions of authority, be it in government or industry or, dare I say, academia, say: you just have to change your attitude, you know, just trust this damn thing, right? People don't do that. Trust is something that you earn. You can't just demand that people trust you. And so we are seeing quite a lot of activity in that space, where you sometimes hear people in industry who really want to do something about that trust deficit effectively saying to government, hey, just smile more when you talk about AI, because that will make all the difference. We know it doesn't work.
So what is the tried and true way of building trust? You first articulate why you want to use this technology. You say really clearly: here's what we think the benefits will be to you, as a user or someone who's affected by AI, and for me, as the government or the company that is actually putting that technology into the world. And then secondly, you do what people like Professor Stuart Russell have been urging us to do for a while now, which is: don't be arrogant about it. You don't say, well, you know, I'm super clever, I'm going to be the one person in the world who never makes mistakes. Instead, you go: look, I've made every effort to ensure that this piece of technology is foolproof, but I've assumed that I'm wrong. I've assumed that the technology might go wrong. And if it does, here's what you're going to be able to do to address any problems or any harm that you experience. If you design an AI system, and indeed any product or service, on the basis that it might fail, then you're going to build a much more robust product or service. That combination, I think, is how you actually build a firm foundation of trust, rather than essentially just trying to market your way out of it.
Dr Miah Hammond-Errey (05:49)
That's a really interesting reflection. Do you think big tech is doing enough to earn the trust of Australians?
Prof Edward Santow (05:55)
That's a good question. And I don't think it's possible to say we're just travelling in one direction, because I'm certainly seeing government and the tech industry doing both the good stuff that I've just referred to and some of the bad stuff as well. I do think we need to get past this idea that all regulation in this space is either good or bad. If you're simply a cheerleader on one side of that contest, then I think you're doing a great disservice to the Australian community, and you're also never going to shift the dial on things like community trust, which is so fundamental to increasing AI adoption. So I guess what I would like to see the tech industry do a bit better, and frankly I'd like to see government do this a bit better as well,
Dr Miah Hammond-Errey (06:22)
Absolutely.
Prof Edward Santow (06:47)
is to, and this might sound like a paradox, forcefully advocate for the sensible middle ground that Australians occupy on pretty much every major issue of the day: we want to take a balanced approach, we want to understand what the risks are and what the opportunities are, we want to lean into the opportunities, and we want to make sure that the risks are addressed. Rather than that much more you-have-to-pick-a-side approach, which I think is not rational, and also not helpful.
Dr Miah Hammond-Errey (07:21)
We are recording in June. So last month, we had a change of minister in Industry, Innovation and Science. How do you think Tim Ayres will position Australia, and what's really important for us to focus on?
Prof Edward Santow (07:32)
I'm very bad at predicting the future. I've got a terrible track record. So I'll base this on what he's said so far. I think the new minister, Tim Ayres, has been very clear in saying that he wants to foster good AI adoption, and I think that's right. He's linked it to the agenda that the re-elected Albanese government has been prosecuting hard on productivity.
And he's particularly emphasised the point that this is something that needs to be done hand in hand with workers, with employees. I think that last point is so important. My colleagues Nick Davis and Llewellyn Spink have been doing some really deep work with different employee groups for a bit over a year now, also working with Peter Lewis at Essential Research. What that shows is that employees in Australia are actually very open to the use of artificial intelligence in their workplaces. What they object to is feeling like invisible bystanders, which was the name of our report, when this sort of new technology is introduced.
What they said to us, more positively, is that when they can see a big technological transformation starting to be rolled out, they can actually help make sure it is a success. You know, the statistics on AI projects that fail are dismal, right? About 80% of AI projects fail in the real world. I'm not talking about in the laboratory, I mean in the real world; that's real money.
And so when workers are saying, you know what, we could see this was going to be a failure, but if our supervisors had done X, Y and Z, it would have really set the project up for success, I think we should listen to them. So I'm really interested to see how Minister Ayres wants to take forward that idea of not seeing AI as an implacable threat to employment, but as a transformation that, if we get it right, can be really good for employees as well as for the broader economy.
Dr Miah Hammond-Errey (09:49)
I'll link it in the show notes. I also think that's very interesting because it continues to show how critical workers are to the success of technology adoption in companies. People often forget, when they talk about AI and productivity, that workers are the mainstay of your value, and if you don't actually engage them in the process, you're really heading down the wrong path. As AI development continues, are there any things we can't afford to get wrong?
Prof Edward Santow (10:17)
It's a great question. The first thing I would say is, you know, with any technological transformation, in a sense you also need something that counterbalances the technology itself. Whenever I've seen a really, really great project that relies heavily on AI, one that fits in that 20% of successful AI projects, one of the things I'm always struck by is that not only have they got the technology right, they've got the non-technology right as well. So when you are investing in an AI project, often what you need to do is invest just as heavily in the things that sit alongside it.
Dr Miah Hammond-Errey (10:57)
I'm going to go to a segment. It's called the Contest Spectrum. What's a new cooperation, competition or conflict you see coming this year?
Prof Edward Santow (11:05)
Look, I am worried about the kind of heightened conflict that we're seeing between major powers. It's at least a two-way conflict and possibly a three-way conflict. The obvious conflict is between the United States on one side and China on the other, with the US articulating a very muscular set of policies in respect of AI, and a very muscular approach to the tech companies based in the United States itself, and, I guess, seeing it in highly contested terms vis-à-vis China. If there's a third major player in that context, it's the European Union, which has articulated a vision for how AI should be developed and used within clearer regulatory boundaries than the United States. In some ways, the EU and China share that in common. China has very strict regulation on technology and many, many other areas. It may be that some of that regulation is different from laws that we would develop in a liberal democracy, but it's not an unregulated space. We can see that kind of contestation continuing to build over the course of 2025, and we saw it right at the outset of the year at the AI Action Summit in Paris in February. And I worry that it suggests there is one extreme view that we have to subscribe to, that is the best view. Whereas I actually think none of those three jurisdictions articulates a really good position to adopt when it comes to AI policy, and certainly not for Australia. I think Australia tends to be much more in the pragmatic middle, and that's where we should remain.
Dr Miah Hammond-Errey (13:08)
Let's go to your book. It's called Machines in Our Image: The Need for Human Rights in the Age of AI, and it was co-published with Daniel Nellor.
It's now nine months old, which in AI terms is an eternity. You presented some utopian and dystopian futures. What are we getting right? And where can we do better?
Prof Edward Santow (13:26)
Yeah, so when I talk about my utopian and dystopian visions, I often say that I feel like I'm living in a fever dream, because those visions are kind of forced to coexist in ways that are really uncomfortable. But it is the reality, right? The things that excite me as a human rights lawyer are things like the ability to use AI to make the world more inclusive and more accessible for people with disability: for a blind person to hold up a smartphone and for that smartphone to be able to tell them everything around them, allowing them to engage much more autonomously. That is super exciting. And at the same time, the same technology is being used in various places around the world to perpetrate terrible repression.
And we can see with facial recognition technology that when it is misused it can cause terrible, terrible harm. It's really hard for me to weigh up those two things. Are we getting more right than wrong? I think we probably are. I think the vast majority of companies want to do the right thing and certainly don't want to cause harm. But in a sense, I don't think that ledger is determinative. Too often I find myself participating in these debates, which make me want to chew off my own fist, because someone will say, with enormous excitement, you know, AI enables this wonderful thing, don't you love that thing? I go, yes, I love that thing. And they go, well, it's like having a balanced investment portfolio: then you have to accept that AI is also causing all of these harms as well. And that's just not the way the law works, but it's also not the way common sense works either, right? Why would we have to accept some unnecessary harm on the basis that the technology in question can also deliver something that is genuinely good? I think we can do better than that. I think we can lean into those genuinely exciting elements of AI while pushing back against the risks and threats.
Dr Miah Hammond-Errey (15:45)
There are so many fascinating intersections here, where AI does seem to sit outside so many of our existing legislative frameworks, even though they could be applied. Why do you think that is?
Prof Edward Santow (15:59)
I mean, part of it is some really good lobbying by some people in the tech industry, who've called into question whether existing law, the vast majority of which is technology neutral, applies to AI. And I think what they've sometimes done is extrapolate from something that is highly unusual, like, you know, self-driving cars, and said all AI is like that. And actually, that's just not true.
I can accept that there are some products and services that rely heavily on AI, and self-driving cars might be a good example, where the existing law needs to be updated, at the very least, to properly address that particular scenario. But the vast, vast majority of uses of AI are nothing like that at all. They take an activity that we've always been doing, or have been doing for many, many years, and they just create a new way of doing it. And what our law basically says, to give a human rights example, is this: our anti-discrimination law says you can't discriminate against someone on the basis of their sex or their gender or their race or their age. And it also says, impliedly, that we don't care whether that discrimination happens because you're a human with bad intent or because you are using a piece of technology that has been designed poorly; it is just as unlawful either way. But we've found ourselves in this situation where, if it's a human doing the bad thing, we go, well, that's clearly unlawful, but if it's an AI system that has led to the unlawful outcome, we start to stroke our chin and say, isn't this an interesting ethical question? And we go, no, it's not an ethical question at all. And it's not particularly interesting. It's definitely a legal problem, and we should continue to categorise it that way.
Dr Miah Hammond-Errey (18:06)
I agree. But there does seem to be so much discussion about the values of, whether it's the US or China or the European Union, and where the application of law sits relative to those values. I find that to be quite a fascinating development, because values underpin legislative and regulatory frameworks, so it's an interesting way to reframe the discussion. But it's something I'm seeing more and more: if you believe in liberal democracy, you need to go down this path, and if you're, you know, against oppressive regimes, you need to approach regulation in this way. Which, as you said, comes back to this sort of regulation-is-good-or-bad binary. And as we know, binaries are not helpful in improving human rights, or national security for that matter. They don't tend towards good outcomes.
Prof Edward Santow (18:55)
I think that's right. And just on that, I think it's such a seductive idea, particularly for someone like me, that you put front and centre the discussion about values or ethics, about what is right or wrong, and you say we need to talk more about that when it comes to AI. Gosh, you had me at hello when you say something like that. The problem is: what is the natural implication of that? Well, it is that we are having a discussion as if the rules don't exist. The reason we have to talk about ethics and values and so on is this underlying idea that there are no rules that actually govern the area. And that, I think, is deeply problematic, because it sweeps away a whole bunch of rules that are applicable but that are not being rigorously applied. What we need to do is spend a lot more time and energy asking the question: how can we apply those laws that are applicable but that we are not properly enforcing?
Dr Miah Hammond-Errey (20:03)
Yeah, it is a really interesting question, and one that doesn't necessarily reconcile quite so easily with the rapid adoption of AI, particularly for a country like Australia, where some of those laws, you know, anti-discrimination, anti-defamation, competition, a whole bunch of our legislative frameworks, should be applied and could be applied, but for some reason that application is very slow. Which is, I mean, natural; legislative change and application is not known for its speed. But that is complicating this counterbalancing push for rapid adoption in ways that perhaps don't fit within those existing frameworks.
Prof Edward Santow (20:46)
Yeah, you're right.
Dr Miah Hammond-Errey (20:47)
It feels like there's a growing number of areas where a focus on human rights is needed to help us respond to AI in ways that, as you've highlighted in your book, advance the ideas of dignity, equality and respect. As individuals, what can we do to contribute to this?
Prof Edward Santow (21:06)
There are a couple of things. The first is we need to understand the technology. I'm conscious that that can be quite a daunting idea, because it could sound like we all need to turn ourselves into PhD-level scientists, and that's definitely not what I'm saying. What I am saying is: you know, previous generations were taught mathematics, even when they weren't very good at mathematics. When I was at school, maths was compulsory right up until the bitter end in year 12. And I remember teachers saying, you need to understand this because mathematics describes so much of the world around you; the buildings that you go through, work in, live in won't stand up unless some pretty basic, fundamental mathematical principles are understood. Increasingly, our world is heavily influenced, in some ways partly defined, by the ways in which we use artificial intelligence.
If we were to remain completely ignorant about how that works, then we will continue to understand AI only by reference to metaphors that are increasingly becoming problematic. When we talk about AI in anthropomorphic terms, when we describe it as if it were a person or an animal, we're doing that because we don't understand it, right? We're doing that because the only way we can understand it is by reference to a metaphor, to something that we do understand, and that can only take us so far. So we talk about a strategic understanding of AI, or a kind of minimum viable understanding when it comes to AI. And I think that's useful because it basically asks: what are the things we need to understand to allow us to make good decisions?
The second thing, I think, is that we then need to make those good decisions: in the way we ourselves use AI, and in the way we might talk about it with the people we influence at work, in our families, that sort of thing. We need to exhibit sensible behaviour. Now, you've actually got to put that into practice, and that can be difficult, that can be challenging. And then thirdly, I think we need to think about our role as individuals in the polity. That means taking that balanced approach, where we really see, as I say, those important opportunities that AI has for making our world more inclusive, for fostering economic development, for fostering productivity, because those things are real, and we are equally clear-eyed about where the risks and threats are and about the need for government and business to do better in understanding and addressing those risks.
Dr Miah Hammond-Errey (23:56)
I'm going to ask next what government and business can do better, in particular with a focus on data and data protection, which is a significant concern for Australians. The availability of public data on individuals means that even new startups in Australia are really forced into using that ecosystem; there really aren't many alternatives to ecosystems that harvest data in that way. Given it's such a primary concern, how can we move the dial on this?
Prof Edward Santow (24:29)
Yeah, I wonder if I can start by giving a very oblique answer to your question. This is Privacy Awareness Week in Australia, and so earlier in the week I was invited to an event to reflect on the history when it comes to privacy. And I talked a bit about the Hollerith machine, which is something I used a bit in the book that Daniel and I wrote. It goes back to the early 20th century. It was a kind of proto-computer, developed in the United States by a man called Hollerith, and the idea was that it enabled the census to be carried out in the US at greater than human speed. It used all kinds of old-fashioned punch cards and so on, but it was a huge change. It allowed information to be processed at 30, 40, 50 times the speed that any individual, highly skilled census worker could manage. Hugely impressive, a wonderful innovation, absolutely fantastic. So good that it was exported to various countries. One of the first countries it was exported to, in the 1930s, was Germany.
And the Nazi regime used it initially, I guess, in much the same way that it had been used in the United States. But as their malign intent towards the Jewish population, as well as others who were terribly, terribly persecuted, turned into action, the Hollerith machine allowed the Nazi regime to use people's personal information against them in the most horrific, almost unimaginably terrible ways, and enabled many more people to be rounded up and murdered than would otherwise have been possible.
As we move now well into the 21st century, with much more sophisticated machines that allow us to wrangle personal data, we need to be absolutely laser-focused on getting that targeted approach: one that enables the machines to be used for good, but that also makes it more difficult for them to be misused, and that ensures, when they are misused, which will inevitably happen from time to time, that there are really good mechanisms in place that government operationalises to protect the community from unnecessary harm.
Dr Miah Hammond-Errey (27:00)
I was thinking about domestic questions when I asked that, but it would be remiss of me not to mention that obviously this is occurring digitally around the world in a number of places at the moment. It is such a tense and challenging topic to talk about, but the way that technology, and particularly public information, is being used to target individuals requires a global response. How do we start to step up to that table when there seems to be global fracturing on this particular issue?
Prof Edward Santow (27:30)
That is what I kind of walked away from the summit in Paris feeling most worried about. Along with all of the economic opportunities that people are rightly excited about, there are genuine problems, and they're not risks in the sense that they might one day occur. As you point out, Miah, these are things that are happening right now, in places like Xinjiang, but also in other places, including in Australia, where AI has been misused or has gone wrong and caused terrible injustice.
Yeah, there are multiple ways in which we need to address that problem. One is the whack-a-mole approach, where you have good accountability mechanisms that identify the problem and then seek to remediate it. But I think you make a good point, which is that unless you take a global approach to this, you're left only with whack-a-mole, and that's a losing battle, right? It would be so much better if we were living in a world where the prospect of achieving global consensus on issues that really matter to us as humanity were greater. I think we need to recognise that reality, do what we can today, and try to shift things for the future so that there is a greater prospect of taking the kind of global action that is necessary to address those threats more holistically. But maybe I'd make one more point, because that's probably a bit bleak: there is great work happening internationally. If you think about some of the good work being done on international technical standards by the ISO, the IEEE and other bodies, that's actually really, really positive. It tends to be civil society, expert and sometimes industry led, and that's a good thing. Does it fully solve the problem? Clearly not. But there is good work being done internationally, notwithstanding that it's a much more contested space.
Dr Miah Hammond-Errey (29:25)
Let's go to a segment on alliances. How can nations, perhaps great powers, but perhaps a broader set of nations, work together on issues like AI, and what alliances will be most important for Australia in relation to AI?
Prof Edward Santow (29:38)
I mean, one thing that gave me hope from the AI summit was that there was one issue on which countries did almost universally agree, and that was the idea of public interest AI.
The Public Interest AI Initiative, which was driven particularly strongly by the French government but which lots of other countries contributed to, ended up, I think, with an initial endowment of about 500 million euros. The idea is that over the next five years it will be added to by about the same amount every year. And that spoke, I guess, to this understanding that we do need to do more in that positive space. So the fact that countries did come together quite positively on that gives me a bit of hope.
Dr Miah Hammond-Errey (30:29)
I want to go to another segment. What are some of the interdependencies and vulnerabilities of AI you wish were better understood?
Prof Edward Santow (30:35)
I think when we talk about AI, too often we think about it as something that is disconnected from humans. To repeat a quotation that's been used many times before, AI is actually just humans all the way down. When people talk about AI as synonymous with automation, I guess what they're doing is taking the human out of the picture in a way that really reduces our agency, our ability to actually determine our future. And that's not true. Most AI systems preserve a very important role, or maybe even a series of roles, for humans, if we take them. So the interdependency there is really between the human and the machine. And it becomes a vulnerability if we're just handing the keys to the car over to the machine, when what we really should be doing is thinking about how we can operate in a more symbiotic way.
Dr Miah Hammond-Errey (31:40)
What do you see as the most interesting or concerning technology and security trends?
Prof Edward Santow (31:45)
It's a very long trend, but I think it's this idea that we just need to give humans more and more opportunities to opt out, or to exercise some kind of control or oversight over the way in which our personal information is being used. I think that's such an overstated part of the solution.
The trend of making it all about control and oversight by individuals puts way too much responsibility and pressure on us as individuals, particularly in circumstances where it's actually difficult to do anything, even in the unlikely event you do see something going wrong. And it takes the responsibility off those companies and government bodies to just do the right thing, to actually respect you and your personal information.
Dr Miah Hammond-Errey (32:37)
Yeah, I absolutely agree. How do you frame AI for business and government when there are so many loud voices at the polar ends of the spectrum?
Prof Edward Santow (32:46)
Yeah, it's a good question. I think you frame it by reference to something that, if the following things are true, could be great. This idea that AI is inherently good or inherently bad is just so immature that we need to move away from it.
If you say to a minister, you can't do this because, that's pretty limiting. But if you say, you can do this if, then I think that's a much more empowering way of offering a way forward. So when it comes to AI, I think you can say: you can achieve fantastic productivity and other benefits from AI, and you can do so in a way that is consistent with our values as a country, if you do X, Y and Z. And I think that framing is helpful.
Dr Miah Hammond-Errey (33:39)
I'm going to go to a segment, Emerging Tech for Emerging Leaders. What do you see as the biggest challenge for leaders in the current AI environment?
Prof Edward Santow (33:49)
I think the typical answer to that question is just keeping on top of the developments. That is the conventional answer, and it's the conventional answer for a good reason: it's true. But I think there's another problem that rests in the same space, and that is this idea that as a leader in an organisation you have to be everything, the absolute best expert on everything. And the thing that surprises me about some companies that do well with the AI transformation is that often the leader is not someone like that, not some amazing maven when it comes to AI. It's someone who really, really understands their business or their organisation. And so they keep saying to the kind of people who wear skivvies and come and sell them tech products and AI products and services: explain it to me so I understand it. Explain it to me so I understand it. And that's actually a really good way, I think, of leading in this space. You keep breaking it down, and until you understand it, you don't use it.
Dr Miah Hammond-Errey (34:58)
You've just solved the problem of Australians being sceptical. They're waiting for answers that they understand. Coming up is Eyes and Ears. What have you been reading, listening to or watching lately that might be of interest to my audience?
Prof Edward Santow (35:09)
Yeah, look, this is sort of an embarrassing question. I have four young children, and my partner once walked in on me when I was reading to, I think, our six-month-old at the time, a book about the Holocaust. And I thought it was okay because, you know, a six-month-old doesn't speak English, doesn't speak any language. But she still thought, perhaps rightly, that that was an unhealthy thing to do. And I accept that. And now I find myself again, well, this time reading a series of books about the gulags in Soviet Russia. And I feel like maybe, even with the passage of time, I need to move a little bit out of that very depressing space. So that's Solzhenitsyn I'm reading.
Dr Miah Hammond-Errey (35:57)
You're also in...
Prof Edward Santow (36:00)
Go on, go on.
Dr Miah Hammond-Errey (36:01)
You're also implying that you're not taking your partner's influence particularly well.
Prof Edward Santow (36:05)
I agreed with her. I 100% agreed with her in the moment. The problem was that I didn't act on it. There's sometimes this sort of tension between, yeah, exactly, exactly. Yeah, the theory and the practice are not always perfectly aligned.
Dr Miah Hammond-Errey (36:13)
Behavioural change is hard.
I've got a segment called Disconnect. How do you wind down and unplug?
Prof Edward Santow (36:26)
Well, with four kids, I spend quite a bit of time with them, and they're pretty good at being unplugged. They all happen to be very active, and when I've got a pretty busy day job, there is something really, really important for me about not being on devices. And they're not, at this stage at least, constantly pressing us for access to screen time. Instead, they just want to run, they just want to go outside. We're lucky enough to have a couple of parks within walking distance of where we live, and so I often find myself in the park, pushing a swing or having a soccer ball kicked at me. Sometimes, you know, before it happens, it's not really what I want to be doing.
But pretty much every time I'm there, I do feel like this is a more connected way of living than sitting in front of a computer screen typing a long document that I'm not enjoying typing and that I'm sure no one really wants to read.
Dr Miah Hammond-Errey (37:24)
Kids have a way of grounding you in the moment; there is something so special about the presence they demand, if you can find yourself appreciating it. My last segment is called Need to Know: is there anything I didn't ask that would have been great to cover?
Actually, I want to go back to something: the information environment, and the way that manipulation of the information environment, particularly through AI, is having an outsized impact on security and on threats to democracy. I know you've written about that in your book.
Prof Edward Santow (37:56)
Yeah, so Daniel Nellor and I talked about the pollution of the information environment, which is certainly a feature of the age of AI and the age of social media. And there was something that really surprised me: the leading research on this suggests that there's very little actual change in the total number of lies or hoaxes out there. The thing that has changed is our ability to find an authoritative source that debunks the hoax or the lie.
Where previously you had newspapers, politicians sometimes, maybe even university academics, who could be seen as kind of canonical sources of truth, that's no longer the case, right? We often see in public life this really strong pushback against experts, and 'experts' is often said with air quotes around it.
In some ways that scepticism is well placed. There are many experts, historically and today, who have misused their position; I accept that. But there are many more who have not. And so this idea that a lie or an untruth is increasingly difficult to debunk is, I think, terrible for our polity and something we need to do better at addressing.
Where we're left is a really bad place: a polluted information environment where mis- and disinformation can thrive and can influence people into a skewed view of how the world really is.
Dr Miah Hammond-Errey (39:40)
It's such an important point, because what you're effectively saying here is that the old gatekeepers, academics, newspapers, politicians, gatekept information, and we've replaced them with an inability to verify truth; we're no longer able to come to consensus on shared facts. To me, that's an existential democratic crisis. If we can no longer share fact and experience together, how can we create unity and consensus, deliberate, and have a democracy?
I think it's a problem we're going to have to confront, and unfortunately I think we need to do it very quickly, which is why I wanted to get your thoughts on where we can go now.
Prof Edward Santow (40:23)
You've got a domain called social media that is a source of critical information for many, many people. It might be the primary source; for some people, it's the only source of critical information that they don't experience with their own eyes about the world, about politics, about companies. And it's largely unregulated, the truth or otherwise of what's said there. Contrast that with newspapers and TV news and even a lot of online news, where there is regulation. It's imperfect, right? But there are genuine consequences when a lie is peddled in those places. That seems to me a very strange situation for us to accept. And while I accept that it's not a popular view in the world, certainly not in the US, and not particularly popular here in Australia, I commend the Australian government for attempting, prior to the election, to address that problem. It didn't succeed. I think they need to have another go.
Dr Miah Hammond-Errey (41:33)
Ed, thank you so much for joining me. It's been a real pleasure to chat.
Prof Edward Santow (41:36)
Thank you. It's been my pleasure as well.