Technology and Security

In this episode of the Technology & Security podcast, host Dr. Miah Hammond-Errey is joined by lawyer and digital rights activist Lizzie O'Shea. This episode explores Australia's technology debates through a security and legal lens, addressing copyright, creativity, AI, and the legal structures, including class actions, that shape society and security. We discuss how often in the AI debate we are asked to trade off immense future potential against real, present harms. This episode breaks down why proposals to let large language models freely train on the copyrighted works of Australians have rattled artists, news media, and civil society. Lizzie explains the Productivity Commission's push for a data mining exemption, unpacks the strong community reaction and the distinction between fair use and fair dealing, and highlights what's at stake for creative industry sustainability and fair compensation in the digital age.
 
We also explore recent legal action against Google and Apple in Australia, the breadth of big tech legal and enforcement action globally, and what this means. The episode also covers the changing nature of US and Chinese AI strategies and approaches to the Indo-Pacific, as well as the increase in big tech spending in the Australian policy and research landscape. We explore the vulnerability created by allowing mass data collection, noting that while data minimisation and strong cybersecurity are understood priorities, we question whether they are really supported by legislative regimes. We discuss the significance of incentivising feedback in AI systems to integrate them into businesses in productive ways, and of crafting successful narratives for cautious adoption of AI. Finally, we look at why litigation has become central to holding digital giants accountable, and how Australians' blend of healthy scepticism and tech enthusiasm might finally force smarter AI regulation. The conversation highlights how quick fixes and premature adoption risk deeper, lasting social harms and national security threats.
 
Resources mentioned in the recording:
 
·       Future Histories: What Ada Lovelace, Tom Paine, and the Paris Commune Can Teach Us About Digital Technology, by Lizzie O'Shea, shortlisted for the 2020 Victorian Premier's Literary Awards. https://lizzieoshea.com/future-histories/
·       Burning Platforms podcast, https://percapita.org.au/podcasts/
·       Empire of AI by Karen Hao 
·       Digital Rights Watch https://digitalrightswatch.org.au
 
This podcast was recorded on the lands of the Gadigal people, and we pay our respects to their Elders past, present and emerging. We acknowledge their continuing connection to land, sea and community, and extend that respect to all Aboriginal and Torres Strait Islander people.
 
Thanks to the talents of those involved. Music by Dr Paul Mac and production by Elliott Brennan. 

What is Technology and Security?

Technology and Security (TS) explores the intersections of emerging technologies and security. It is hosted by Dr Miah Hammond-Errey. Each month, experts in technology and security join Miah to discuss pressing issues, policy debates, international developments, and share leadership and career advice. https://miahhe.com/about-ts | https://stratfutures.com

Technology & Security episode with Lizzie O'Shea
Dr Miah Hammond-Errey (00:00)
My guest today is Lizzie O'Shea. Lizzie O'Shea is a lawyer and author. She publishes and speaks regularly about law, technology and human rights. She's a principal lawyer at Maurice Blackburn. She founded and chairs Digital Rights Watch, has authored two books and contributed to multiple others. She's also won numerous prizes and awards. Thank you so much for joining me today, Lizzie.

Lizzie O'Shea (00:21)
Thank you so much for having me. It's great to be here.

Dr Miah Hammond-Errey (00:23)
We're coming to you today from the lands of the Gadigal people. I pay my respects to elders past, present and emerging and acknowledge their continuing connection to land and community.

In your book, Future Histories, you argued that we need to stop looking forward and start looking backwards. In the book, you construct what you call a usable past that helps us to determine our digital future.

Five years on, in today's technology discussions, where are we too focused on the future at the expense of the present?

Lizzie O'Shea (00:51)
That's such a good question. So I wrote the book with the idea being that a lot of the debates that we have about technology are decontextualized. They take place in a vacuum as though we've never had these discussions before: how to balance issues like privacy and security, how we think about frontier technologies versus consolidating innovations that we've already got, or

how we think about the risks that come from design, and how we can best ensure protections throughout the process or bolted on at the end. So these kinds of discussions are treated as unprecedented in some ways, especially when it comes to specific forms of technology like social media or AI or whatever it may be. And so the idea of the book was to say, well, actually we've had these debates many times in the past, and that a lot of the questions we have about technology now are

questions about human behavior, about social relations, about how history moves through time, how people make and shape their own destiny. And there's a lot of debates and discussions and movements that have come in the past that have sought to grapple with these exact issues. So the purpose of the book was to shine a light on some of those and hopefully make them relevant to discussions about technology. And a lot of the things that I've talked about, well, you noted that it's now

five years, six years since the book was published, but I like to think the general premise holds true, that we're best placed to make a future that is positive for the many rather than the few if we do think about how to democratize technology, change the power dynamics that give rise to how it's designed and developed.

and also understand that people have a desire to shape their own future, and we should give them the opportunity to do that. Where I think probably we're still focusing far too much on the future is around the discussion on AI. In part it's hard to even identify what we mean when we talk about AI. So it's the complexity of having that kind of empty signifier

as the shaping terminology for the debate. But I think we talk a lot about the potential benefits of this technology. If you look at any of the comments coming out of the major frontier AI companies, they'll talk about how this is going to be an everything machine that solves all our problems. And the reality of what it's like to build that,

Lizzie O'Shea (03:10)
What does the material infrastructure of that technology look like? What price do we pay in building that at the expense of solving other problems in the near term? What is the cost? What are the potential problems we build into this tech if we don't do it carefully? It's increasingly clear, I think, that that's a serious tension within the discussion around AI: this huge drive to recognize what is perceived as being inevitable productivity gains.

It's shaped as this technology that will solve climate change, so we don't need to worry about all the climate impact of building out data centers or collecting information and processing it quickly and having all that infrastructure in place. And meanwhile, the reality of that technology looks very different for people upon whom it's being imposed. So

we can become obsessed with its potential instead of thinking about how we want to shape it in the here and now so that in the future it's a technology that's beneficial.

Dr Miah Hammond-Errey (04:03)
Yeah, because we're being asked to make trade-offs about immense future potential with real present harms in the now. And that tension, particularly in things like the productivity discussion, is not always as clear as it could be.

Lizzie O'Shea (04:18)
Yeah, I would agree with that. If we're talking about AI as being large language models, for example, the vast majority of AI projects within enterprise fail, you know, like upwards of 80%. But how you define failure, what it means to engage in one of these projects, it's a stat that's got a lot of complexity to it. So there's more, I think, to that question.

But what we do see is a form of technology being built and then an expectation that we will all meld how we work, how we behave, how we socialize to the form of that technology because various venture capitalists have invested billions of dollars in this tech and they're now seeking to obtain a return.

which I think is the wrong way to think about it. We should be designing technologies that solve real-world problems, that are designed for us in our society, wherever that may be.

Dr Miah Hammond-Errey (05:07)
In the book, you talk a little bit about the way that the social harms at the time you were writing were shining lights through the cracks in the digital revolution, with a nice little Cohen reference there. And you suggested that our job is to use that light to navigate towards a more democratic digital tomorrow.

Lizzie O'Shea (05:25)
Yeah, that's absolutely my ambition. I do believe that the problems that we have in society will be solved not by technology, but by humans.

and they may make use of technology, but it has to be put in their control. And the more people involved in decision making (anybody will tell you this; social research has proven it for ages), the more diverse sets of people you have involved in these kinds of decision making, the better outcomes you'll get, right? So democracy does work in the sense that you want a system of government that serves lots and lots of people in their diverse interests. That doesn't mean it's easy or straightforward, or that it always is robust and functions as we'd hope, but...

I much prefer that to what we're seeing around the world in other places experimenting with different forms of governance. And certainly in the United States now, especially since Trump took office, we've seen an alignment between a number of technology companies and the American state to push out an agenda. You know, just in recent days, we saw Trump posting on Truth Social about

the hostility he's going to show to anyone that seeks to regulate US companies. And that's a real problem. We also see experimentation with other forms of authoritarian tech in places like China as well. I don't want either of those models. I want to have technology that is for people, right? That involves people in its design, that looks at what problems we have and then seeks to solve them, rather than building a fancy everything machine that will supposedly solve all our problems when it's not clear that it will.

Dr Miah Hammond-Errey (06:51)
I want to go to a segment. What are some of the interdependencies and vulnerabilities of technology, security and law that you wish were better understood?

Lizzie O'Shea (06:59)
Yeah, this is an interesting question I thought about for a little bit, because there's a couple of different answers. But one of the things that was the reason that Digital Rights Watch was created, in 2016, you might recall, was that we were debating the introduction of a metadata retention regime.

And that got up, so it was one of the most extreme; I mean, it was three years of metadata retention that could be accessed without a warrant. So it was a regime at the time that was relatively at the far end of what was made possible. And it's been interesting for me to watch how that's unfolded in the last 10 years since I've been working with Digital Rights Watch,

and looking at how these kinds of initiatives, like legislative regimes to dictate how surveillance will be done, have changed over time. Our concerns around the metadata retention regime,

I think, were based on concerns about cyber security among other things, and concerns about privacy as well, but particularly how mass data accumulation creates a cyber security risk. And that's true also for things like zero day vulnerabilities and the use of them by government. So in the book, I talk about the example of the WannaCry worm and how that was a vulnerability in, I think it was, Microsoft systems that was known by the NSA but kept secret until it was released. And then it created all sorts of other problems.

There's an interesting interdependence of vulnerability. We've created a vulnerability for ourselves by allowing mass data collection, prioritizing national security interests at times over security and privacy. And in the last 10 years, what I think we've seen is a change in that dynamic. Now it feels like data minimization and prioritizing strong cybersecurity are much more understood as a priority

when it comes to national security in a way that perhaps it wasn't 10 years ago.

Dr Miah Hammond-Errey (08:47)
I see the interdependency. Absolutely. I think, in part, there's a changing nature between digital infrastructure and digital services, personal data, cybersecurity and the state. I think we're in a transition phase now, as you alluded to earlier, with this kind of tying of US tech companies to US state power, particularly in relation to digital services regulation, tariffs and adoption of AI. We need national security experts, we need privacy, you know, we need a lot of disparate groups to come together to actually find something that works for Australians.

Lizzie O'Shea (09:55)
I have to say, I see you as one of the bridges between those communities in many ways, which I think is a real credit to you because they're quite disparate. But yeah, I also think that's true in relation to, say, China.

Dr Miah Hammond-Errey (09:55)
Yeah.

Lizzie O'Shea (10:07)
China's approach to AI versus the US, and how the rhetoric that comes out of China, at least for our Pacific neighbors, is about openness, is about national sovereignty, is about preserving your capacity to shape these models how you choose, which is in sharp contrast to the US, right? And so Australia has an interesting

Dr Miah Hammond-Errey (10:24)
What's in sharp contrast to the contemporary US? I mean, you know, some period of time ago, what you're seeing in the Chinese, you know, foreign policy on tech is actually something that we would have expected to come out of the US in the not-so-distant past.

Lizzie O'Shea (10:28)
US. Correct. That's right. Yes, that's true. That's true. I think the realities for many of our Pacific neighbors are that China looks like one of the superpowers you'd rather, you know, rather align with, because, at least in rhetoric (I'm not saying in practice this is true), it has a technology policy that is much more focused on empowering those states as compared to the US, right?

I'm not a foreign policy specialist, of course, but I guess your point at the end there, that you need other kinds of people in the room to talk about this, is

the exact right conclusion. Like we need to be able to discuss this internally coming from quite different perspectives because otherwise I think we're going to miss opportunities. We're also going to potentially end up in places we don't want to be.

Dr Miah Hammond-Errey (11:57)
So let's go to the contest spectrum. What new cooperation, competition or conflict do you see coming in 2025?

Lizzie O'Shea (12:04)
Yeah, well, in recent times, on the theme of AI, we've seen this desire to facilitate a data mining exemption for US tech companies. So this does lead into some legal action we can talk about. In essence, Australia has copyright protections for creatives, for anyone who,

you know, produces copyrighted works. And that protection is geographically determined. So we don't have any large language models training their models on copyrighted material in Australia for that reason. And the Productivity Commission in recent weeks just put out a proposal to suggest that we should have a data mining

exemption, essentially, which would allow large language models to train on copyrighted works. And that would facilitate the acceleration of this kind of technology, potentially also with some localization benefits, because you could train on local cohorts of information. We recently held a briefing on this and invited anybody to come who might be interested; about a thousand people RSVP'd and

hundreds of people showed up to talk about why they don't like this. And so what I think is interesting is that we have not traditionally done a huge amount of work at Digital Rights Watch around copyright, but lots of people who are creatives, lots of people who work in news media, see the great downside of this potential and don't feel that these companies should be able to take things for free and train their models on it.

And there's a question as to whether the potential benefits of this kind of technology can therefore justify the use of copyrighted works in this way. Like, what are going to be the consequences of that for the creative industries? I think it's fair to say that it could be very damaging: as in, how will you make money from producing creative works, except in small cohorts, if this

can be taken for free and used by others to create machines that build works that look like yours, right? That look like creative work. So there's this real question about what it would do to the creative industries. And then there's this real question about whether we want or we think it's justified that large tech companies that have built these models can do this with people's information. And so what I see as an interesting alliance is that we as digital rights activists will be working with a whole bunch of creatives, news media companies.

So how are we gonna build a sustainable creative industry? That's the kind of conversation I wanna have with people over the next 12 months and also build a policy environment that

effectively and fairly compensates creatives, and allows large language models to train in fair, transparent and accountable ways using Australian content. That's, I think, going to be an interesting alliance. I mean, it's a conflict as well as a potential cooperation. Exactly, a bit of both.

Dr Miah Hammond-Errey (14:47)
Yeah.

Perhaps a conflict first and then some cooperation.

Australia's got a reputation for what you've put as regulatory zeal and

kind of like you, I don't always agree with the specifics of attempted reforms. But I do really appreciate policymakers attempting to address matters that Australians are really concerned about, and creating a future that can work for Australians and Australian national security and interests. There are a couple of tensions at play though. We're seeing a significant increase in tech money flowing into the Australian policy landscape.

How do you think that will impact the Australian technology policy environment, this increase in largely big tech money?

Lizzie O'Shea (15:37)
Yeah, so Big Tech always has a lot of money and lobbying on its side. And I also think they're in an interesting phase of their development. There's a lot of speculation now that this might be as good as it gets for ChatGPT, which is release number five. Like, it's possible that we won't see these great leaps forward.

The point being that huge amounts of money have been invested in

a number of these models, and the question is how do they recover that investment. And I think they are going around asking governments to create regulatory environments that suit them for that purpose. So that is a problem, I think, for democracy, because they are very powerful and influential, they have a lot of money, and they fund a lot of organisations as well to do this work. And there's an imbalance, because civil society in Australia

that isn't funded by Big Tech is very small. There's a lot of money in research as well. I'm not saying that this is a particular problem of Big Tech, to be honest, but this is a traditional problem when there are large, concentrated

Dr Miah Hammond-Errey (16:39)
Yeah.

Lizzie O'Shea (16:40)
industries with a small number of organizations who want to maintain monopolistic power. Like, we've seen this before, to riff on a theme of my book, right?

But I do think it's fair to say that there's something slightly alarming about the level of lobbying. I mean, these companies are

some of the best resourced in human history, and some of the smallest in terms of numbers of people who work for them relative to their size. Some of them, they're very well protected in a bunch of ways, as in they've got the capacity to build moats around themselves to avoid competitors.

You are contending with something that looks like the next level up from what we've seen before. It's right to identify this as being a serious problem. So we should take it very, very seriously, I think. And we should be spending our time trying to find ways to remind public servants and elected representatives that the people that they work for are not those companies, they're Australian people.

And we need to find ways to make sure that the benefits of any advances in technology are retained here and the accountability also exists so that it's not just a matter of foreign companies extracting value and taking it overseas.

Dr Miah Hammond-Errey (17:52)
Let's go to legal action. Earlier this month, in a landmark decision, the Federal Court ruled that Apple and Google engaged in anti-competitive conduct. Can you tell us a little bit about this case, why it's important and what future legal action it clears the way for?

Lizzie O'Shea (18:06)
Yes, so I did work on this case. So my other life is that I am a class actions litigator at Maurice Blackburn. I work on a number of different class actions at any one time, and our ambition is, we only do plaintiff class actions. So we only represent people as against large companies or governments to recover compensation for harm that they've suffered. So this action originated with Epic Games, as people may know. If you're a gamer... I'm not a gamer. Are you a gamer, Miah? No.

No, okay. Well, anyway, I'm sure many of you are. Okay, I think some people play it. So yeah, Epic Games basically sued both Apple and Google for the restrictive ways in which they impose conditions on people that put their apps on their app stores. So the App Store in relation to Apple and the Play Store in relation to Google.

Dr Miah Hammond-Errey (18:36)
But I have heard of Fortnite, so you know.

Lizzie O'Shea (19:00)
So Apple and Google both take a cut of any payment that is made through an app. And this was perceived as anti-competitive: the exploitation of market power to impose unfair practices, right? So the case revolved around that. What I was part of is the class action, so for anyone who was a developer or a consumer who purchased anything through the app store. So the actions were heard together.

But in essence, we were arguing about slightly different but often very similar things, about whether Apple and Google exploited their position in the market to extract an unfair advantage.

So in essence, what has occurred is that this was found to be true: that they'd breached the Competition and Consumer Act, that they're in breach because of their monopolistic power, and that that

meant that they are overcharging people in relation to payments made through the App Store. How much they overcharge people is not entirely clear. That's going to be left to what is called a second stage trial. So usually when you have a class action, you determine all the common issues in relation to the class first, and then you determine how much that might mean people have lost and have to be paid as a result of that decision. So we've got through the first stage, which is significant.

I suspect, but I don't know, that there might be an appeal, because of course this is a big deal for both companies.

Like, a lot of the time when I'm bringing a class action like this, people instinctively understand that this is the way in which tech companies behave, but it can obviously take some time for litigation to shake that out and figure out what the actual problem might be, and what the consequences of that might be as well. Now, I think there's a lot of criticism of the law taking too long to catch up, but

what I think is also interesting is that litigation is now increasingly how these companies are being held to account, both here (I mean, this is a bit of litigation that's being run in lots of parts of the world, like the United States and the UK), but also, like what we were talking about before, Anthropic is being sued in the United States for copyright infringement, not for the books that they bought legitimately, but for books that they trained their large language model on that they obtained through pirated editions, through pirate libraries and the like.

So there's a lot of criticism of courts taking too long to resolve some of these problems. But what I think is interesting is they often end up being quicker than other kinds of regulators at times. And they do find a way to solve the problems, I suppose, that many people instinctively understand

exist within the industry.

I think this is going to be interesting to watch, how some of the courts continue to advance what many people understand to be true, which is that large tech companies need to be held to account for breaking rules that they know they shouldn't have broken.

Dr Miah Hammond-Errey (21:47)
You tied a few really interesting threads in there. I kind of want to pull on that idea of

speed, and the fact that, you know, there are so many complaints, I guess, that policymakers don't keep up, that the law doesn't keep up, that enforcement and security don't keep up; you know, that there is a general feeling that the pace of technology development is so fast that we should somehow try to keep up or create alternative mechanisms.

I think it is interesting. Are there ways of improving not the speed, but the outcomes, by better integrating our arms of government?

Lizzie O'Shea (22:25)
That's a very valid question, I think, and one that our arms of government should be asking themselves as well. In my experience, when we talk about legislative reform, one of the concerns that other stakeholders tend to have is that terms may be too flexible or broad, or able to be evaded. I'll

give you a couple of examples, like a duty of care for digital platforms, for example, or particularly social media companies. Is that too broad? What constitutes reasonable steps to carry out that duty of care? Or a fair and reasonable test in the Privacy Act, which is one of the proposals that came out of the review of the Privacy Act, to introduce a fair and reasonable test. Yes, but isn't that something that they could wriggle out of? Like, what's the minimum requirement of fair and reasonable? And actually, in my view, that kind of open-ended wording

is very valuable because it allows the interpretation of it to be flexible and adapt to the times. It also means that companies that are responsible, and I think it's a bare minimum of responsibility, most companies will feel the obligation to comply with that at the higher end of what is considered compliance. They will look to

treat it as more onerous, even if maybe that will be out of caution, right? So there are some ways in which you might never be able to regulate in a way to protect yourself from nefarious companies. There is always going to be bad behavior, and that's when you need enforcement capabilities with regulators to be available and able to be implemented quickly. But more generally, I think introducing flexible concepts

into legislative regimes that can adapt over time through court decisions, that can happen quickly, whatever it may be, that have the least amount of friction possible in getting to a decision. If you can implement a regime like that, I think you're putting yourself in the best position to be able to regulate these industries.

Like, I think about the Privacy Act: we haven't had a meaningful... I mean, we've had some reforms in the last year or so, but we haven't had meaningful, fundamental reform to that act in, I don't know, 40 years. So if we introduce some concepts, I don't wanna wait another 40 years before they become relevant again. Like, we need to make sure they can stand the test of time.

Dr Miah Hammond-Errey (24:48)
Yeah.

On privacy, I mean, something like 80% of Australians want action by government on privacy. Many companies in Australia also operate within the existing regime and want to improve, especially based on, you know, large-scale cyber

breaches and security concerns at multiple companies. And yet for some reason, the challenge has been about meaningful reform, in that we've had small-scale reform around the side. But the issue about what constitutes data about a person has not actually been

in any way clarified.

Lizzie O'Shea (25:25)
Yeah.

Yeah, I agree. Like the political process is what's hampering it, right?

Dr Miah Hammond-Errey (25:30)
For example, the Minnesota shooter was found with reports from multiple data brokers after the shootings. I think it's a tragic outcome of what is and has been known about for a very long time. And that's the latest in a string of possibilities.

Lizzie O'Shea (25:47)
Yeah, the only thing I can perceive is going on is that there's this sense that it would be expensive to the economy. That's, I think, what is going on, right? A sense that it would limit productivity, that it would be costly. And in some ways, I feel that's a misunderstanding of the situation, because it's extremely costly now, not having

stronger privacy protections. I think cyber security generally, mass data breaches, identity theft and the like, these are big problems. The amount of money that companies have to spend to protect

their data holdings, because they're required to collect them by government, for example, is also another cost. If you engage in data minimisation, you can get economic benefits that are significant. And I would argue they outweigh the benefits of holding a bunch of stale data, or whatever it may be. I think the way that it's been done is to suggest that this is a form of regulation that's expensive.

I think that really misunderstands the potential of privacy reform over the long term and the short term, for us as individuals but also us as a nation. I mean, most small businesses don't like the fact that they have to deal with lots of personal information. Like, they're not covered by the Privacy Act. They get sold worse products, because none of

the products they get off the shelf have to be compliant with the Privacy Act. It would be better if they got sold better products. Small businesses feel the pinch of large companies being data extractive and being up against those business models. I think they would be able to be brought on board with the campaign for improved privacy protections, including bringing them into the tent, because even though that means they're subject to a form of regulation,

it does mean that they'd get a better quality product when they buy things off the shelf. And to be honest, the exemption for small business doesn't exist anywhere else around the world, and most of them are operating just fine. So there's this weird dynamic that's been created by having a low-level scheme, where the idea is that the expansion of regulation would somehow be all cost without benefit.

Dr Miah Hammond-Errey (27:56)
The expansion of the current regulation would be all cost with no benefit though, right? You would need to actually make a substantial improvement to what you define as being about the person. Otherwise, yeah, then you're just subjecting all these small businesses to unnecessary regulation.

Lizzie O'Shea (28:00)
Sure.

Correct.

Dr Miah Hammond-Errey (28:10)
I really would love to get a bit of a characterization of the major legal battles facing tech companies. What does that breadth of legal challenge look like? And what does it mean for us in Australia?

Lizzie O'Shea (28:21)
Yeah, for sure. The US has got lots of different litigation on foot at various points. So obviously we're waiting, for example, to see whether there will be an order that Facebook be broken up.

Their role, I suppose, in the US at least, in content moderation is another. So Section 230, which deals with content moderation in the United States, as to whether a social media company is a publisher or merely a telephone line for information. There's a question about that.

What is the role of content moderation and responsibility of social media platforms for content? That's also in dispute at the moment through a variety of different litigation. Can it be protected by First Amendment speech? Could you say that social media companies have no responsibility for content moderation because of that? I think that's increasingly less the case, but that remains to be seen. So the nature of large companies, whether they've got a role in content moderation and the legal boundaries around that, those are two issues.

Of course, what I mentioned before is the issue of copyrighted material and training large language models on that. It looks like Anthropic's in trouble. They've admitted to training on pirated material. The material that they obtained legally and then trained their large language model on is apparently fine within the scope of what is called fair use in the United States. I'm not sure that would be clear in Australia. We have a slightly more onerous test for the use of material, which is called fair dealing.

The exception is more limited than in the US. So even if something is found to be fair use in the United States, in Australia it's not clear that it would fall within that exemption. So companies therefore are seeking the data mining exemption to allow them to use copyrighted works. But Anthropic may go down for using pirated works, at least, in the United States. And then we do have a range of competition matters. The matter that I talked about before in terms of

competition in the App Store is one, competition in search is another. Also, my firm has just issued a proceeding about competition in the ad tech industry, looking at Google's role in the ad tech industry, because it has what would be described, I think, as a vertical monopoly in between publishers of ads and people purchasing that ad space. So these kinds of

practices that have been ongoing for the last 10 or 15 years really are now coming under scrutiny from the courts, and potentially resulting in a range of decisions that will go against tech companies and could be potentially very expensive, and could also fundamentally transform their business model.

Obviously, as for the breakup of these large companies, it remains to be seen whether that will actually occur, because you do wonder, if any of these large companies start facing scrutiny for anti-competitive behaviour that would give rise to a breakup, whether that would be subject to intervention from the White House. That remains to be seen. It'll be interesting to watch how the political dynamics play out if some of these are

very fundamental to the ways in which these companies run their businesses.

Dr Miah Hammond-Errey (31:22)
I know this will come as a surprise to you. But non-lawyers don't always think about legal challenge as a way to shape a more prosperous, equitable, inclusive society.

Lizzie O'Shea (31:33)
How can you not? That's all I think about.

Dr Miah Hammond-Errey (31:36)
And you have touched on it. But I do want to ask, why is legal action in the digital space so important?

Lizzie O'Shea (31:44)
I think there are significant political challenges with regulating tech, for the reasons we've discussed, as in I think there's this huge pressure to find ways to improve productivity, to deal with cost of living issues, which are the primary concerns of Australian voters at least, but that's probably replicated around the world. So there are huge concerns about looking like we're over-regulating things that might bring about a more promising future. In those contexts,

courts do play a very important role. Sometimes because they extract large amounts of money from these corporations and force them to pay fines or pay compensation to people who've been affected, but also because you can get a structural remedy at times if you're a regulator. So it's partly also empowering regulators to go to court and seek a particular outcome that they might not be able to achieve themselves. So it's not just the case that it's private litigants doing this kind of work. It's often regulators doing it as well.

But yeah, I think the reason courts are so important is that the detail, the scrutiny, sometimes also the loose way in which these companies operate, all come together quite neatly in a courtroom, which shines a very expensive, but very detailed, fulsome light upon the conduct of these companies and then gets a decision. I think that has real potential

to be a form of policy making that is responsive to everyday people's concerns, particularly when they're flexible concepts like I mentioned before, but even more generally ideas about what is fair in society. We look to the courts to do that. And I think they're much more responsive in the here and now in many ways, notwithstanding they're slow, in understanding community conceptions of these terms and why the scrutiny of these companies is important.

Dr Miah Hammond-Errey (33:41)
One of the things I've been really interested in following is how some of the legal challenges, particularly those taken by the FTC and more broadly in the US, are able to release information which can then be used in other countries; the level at which they then expose the inner workings of companies, and presumably are able to help other jurisdictions to pursue action.

Lizzie O'Shea (34:02)
Yeah, that's it.

I mean, that's a great point. Yeah, huge amounts of information come out, particularly in regulator actions, like I mentioned before. It's just not possible for everyday people to get access to the kind of information they need to be able to sue a company like this, even if they suspect they're doing the wrong thing, right? You actually need regulators to go in there and obtain the information. And we've seen that here as well.

You absolutely need these regulators that are empowered to get this information, because there's almost no other way. Journalists don't have access to it. You know, you can't FOI these companies. You need regulators to do that. And then that at times creates the basis for bringing an action. When material is exchanged in discovery or disclosure or whatever it's called around the world, you can get access to huge amounts of information.

And certainly we've made use of information that was obtained through US proceedings.

Dr Miah Hammond-Errey (34:50)
Yeah, I think it's also so critical as we move increasingly into a world which is algorithmically curated, and where we're not necessarily as conscious of, or able to interrogate, the way that our information ecosystem and our cognition are being impacted. If you can't see it, you know, you need to find out about it in other ways. Before we move on: Australia is at once skeptical of AI

and also seen as an early adopter of technology and AI in general. How does this shape our regulatory environment?

Lizzie O'Shea (35:22)
Yeah, you're right to say that. So I think we poll either last or second last on optimism and excitement about AI; this is according to KPMG. And then we also poll very highly on interest in using the technology if there were proper assurances in place. So I think it's about 83% of Australians believe that if there were proper assurances in place, including international standards and the like, they would be more likely to use this technology.

But this is often also put in contrast with the fact that many Australians use it all the time. So I feel like that's a healthy skepticism. They've been shown these kinds of supposed magic beans and they don't think it's that good. They want more assurances that it's working correctly. So I sort of see this kind of skepticism towards this technology as a national asset, because sure, it may be fancy, it's very clean and it looks very intelligent, but

Dr Miah Hammond-Errey (36:11)
Yeah.

Lizzie O'Shea (36:19)
you also want to know that it works well, and if you can't see under the hood, can you trust it? And people want to know that people have done the work to make sure it can be trusted. So I think Australians are well placed therefore to have regulation that moves at the pace of trust, as some might describe it. I think assuming there's an inevitable

benefit that comes from innovation, and that this is inevitably opposed to regulation, is not something that most Australians buy. So that's a really good position to be in for the purposes of rulemaking.

Dr Miah Hammond-Errey (36:52)
I want to go to a segment called emerging tech for emerging leaders. What do you see as the biggest shifts for leadership kind of

thought of broadly, from the introduction of new technologies like AI?

Lizzie O'Shea (37:04)
So there's a policy now in Australia, I think, to push this out as much as possible. And there's an assumption that if people aren't embracing these kinds of technology, it's because they're not as smart as everybody else. So the smart ones are embracing it as quickly as possible. And I actually think this is a real problem for leadership, because there's huge pressure to demonstrate that your enterprise, that your government department, is progressing with technology as quickly as possible.

Dr Miah Hammond-Errey (37:30)
your call center.

Lizzie O'Shea (37:31)
call center, whatever it may be, that every business needs to integrate this as quickly as possible. And if you're not, you're backward and you're going to be holding us back. So resisting that pressure, I think, is not just smart because of the risks. I actually just think it's good business practice.

I do think slow and steady, notwithstanding the immense pressure at a federal and a political level, is the way to go, because you will save money in the long term. I think about this in law, where

we're routinely told that our profession is going to be automated away. And I'd be very surprised if that occurs. Now, it does present a whole range of different challenges that we didn't have previously, about how you do entry level work, how you train people to become good lawyers in contexts where some of their work may be automated. But the reality is, for us at least, we've been using automation in our work for the last 10 years at least. And those kinds of tried and true methods of automation have been very valuable.

One of the ways in which systems perform better is if they have timely feedback to the person who needs to receive it. So I think that is the big challenge: to resist that pressure to adopt unthinkingly, and then to set up systems of feedback, including what I would argue needs to be deep engagement with employees, to ensure there's timely

feedback to deliver it to the person who needs to hear it.

Dr Miah Hammond-Errey (38:58)
Yeah, very, very good advice. I want to go to a segment on

Lizzie O'Shea (39:00)
I mean, I don't love getting

feedback from everyone. So it's not always easy. It is a gift. I do actually love it. You want to get honest feedback. You want to create the circumstances where people can be honest. And it is particularly challenging if they work directly for you. But with something like AI, I think that's...

Dr Miah Hammond-Errey (39:05)
Feedback's a gift, Lizzie.

Lizzie O'Shea (39:20)
That's where you do want to treat it like a gift, like incentivize it and encourage it in a way. It's not about you personally, it's about a system we're using. Well, you want to get as much feedback as possible so that it works well.

Dr Miah Hammond-Errey (39:31)
I mean, I think you've raised something really important.

though, and it is the perception that if you're not running towards a technology, you're somehow backward and resistant. And I don't quite know how we shift that, but I think it's really important. Because if we don't put being human leaders at the center of making these decisions, if we don't kind of focus on what makes great leadership great, then we risk outsourcing parts of the work that are actually really crucial to having

good healthy workplaces that are productive.

Lizzie O'Shea (40:03)
I think it's finding ways to tell a good story about how you're being careful with it, because that is, I think, a good story for clients or for people who use your business. I think government has a role to play here. I mean, government feels under this huge pressure as well, but I think government's got to be careful about blindly adopting technology to look like they're automating things

without doing it carefully. And I would encourage anyone in government to just hold the line in terms of thinking about ways to do it carefully, in considered ways and in ways that generate feedback, right?

Dr Miah Hammond-Errey (40:38)
Yeah, absolutely.

What alliances do you think will be most important in the legal technology and security space in the next few years?

Lizzie O'Shea (40:44)
Well, one thing I am really interested in, and again, maybe this is something that is of interest to your listeners: one of the big issues in AI that is often not spoken about, from the human rights side at least, or from civil society organizations, is the use of AI in warfare. And that's because it feels like it's something that is perhaps too big or perhaps beyond our capability. We also don't have the necessary

credibility to talk about the grave challenges that come with using AI in warfare. So AI means many things, but I suppose automated technologies for things like target generation, like we've seen recently in places like Gaza, but also in Ukraine, where some of these more advanced systems of warfare are being used for all sorts of

different purposes. But there's this instinctive assumption that it will somehow improve warfare,

either to make states more powerful against enemies, or to lower the human suffering in war. And I'm just not sure that that's true at all. What is the

status of the laws of war in this context? Does international humanitarian law have a role to play in these contexts?

I would be interested in trying to build out alliances in that respect so that we can have technology that is not accelerating human suffering, especially in the context of war, if we can avoid it.

Dr Miah Hammond-Errey (42:13)
It's happening in a global environment where multilateralism is struggling and in decline. So this is occurring at a point when we're likely to see more state and inter-state violence, at the same time as the institutions are actually not as able to effectively engage with or counter that. And so, yeah, I absolutely agree. It's critical. I hope to have some more of that conversation coming to the pod.

Lizzie O'Shea (42:19)
Yeah. Yeah.

Dr Miah Hammond-Errey (42:44)
One of my segments is called Eyes and Ears. What have you been reading, listening to or watching lately that might be of interest to my audience?

Lizzie O'Shea (42:50)
I'm just reading Empire of AI by Karen Hao, because she's coming out to Australia. And I think the analogy of large technology companies, and particularly AI companies like OpenAI, to empire is a very good one. That's the framing of the book, of course, and it's what gives rise to the title. But it is the most effective way, I think, of understanding how they operate as

something between almost a state and a corporation: the ambitions of a state in some ways, or an expansion of the state, coupled with the incentives and motivations of a corporation. So it's a very detailed, full forensic history of the company that gives you a very decent insight into how it works, and should make everyone feel quite uncomfortable.

And so that's the right book to read, especially if you're an enthusiast for ChatGPT.

Dr Miah Hammond-Errey (43:46)
I'm going go to a segment called disconnect. How do you wind down and disconnect?

Lizzie O'Shea (43:51)
Man, I was thinking about this. Miah, I have like almost zero answer for you. I'm an extremely bad gardener, I suppose, is what I would say. I like doing it with my toddler. And the reason why I think it's relevant is I like being in the natural world. You have to see how things actually grow, not just in ones and zeros. But also, the whole point of gardening is everything is interdependent. You know, if you don't have good soil, you can't have good plants. If you don't have good plants, you don't attract the right pollinators.

Dr Miah Hammond-Errey (44:19)
Final question is Need to Know: is there anything I didn't ask you that would have been great to cover?

Lizzie O'Shea (44:24)
If anyone's listening who wants to work with us, I would just encourage you to get in touch with Digital Rights Watch. Our mission is advocating for fairness and fundamental rights in the digital age, but it's really a challenge holding these large technology companies to account. And there are so many ways in which they're shaping our society. We're up for building alliances with people. And I think that's open to all your listeners as well. And it's just such a privilege to be able to talk to them through your podcast. So thank you so much for having me on.

Dr Miah Hammond-Errey (44:56)
Yeah, thank you so much for joining me. It was a real pleasure.