Pondering AI

Dr. Christina Jayne Colclough reflects on AI Regulations at Work.

In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next.

Creators & Guests

Host
Kimberly Nevala
Strategic advisor at SAS
Guest
Dr. Christina Jayne Colclough
Founder of The Why Not Lab

What is Pondering AI?

How is the use of artificial intelligence (AI) shaping our human experience?

Kimberly Nevala ponders the reality of AI with a diverse group of innovators, advocates and data scientists. Ethics and uncertainty. Automation and art. Work, politics and culture. In real life and online. Contemplate AI’s impact, for better and worse.

All presentations represent the opinions of the presenter and do not represent the position or the opinion of SAS.

KIMBERLY NEVALA: Welcome to Day 12 of Insights and Intuitions with Pondering AI. In this final episode, Dr. Christina Colclough reflects on AI at work. Welcome back, Christina.

CHRISTINA COLCLOUGH: Thank you so much. Great to be here.

KIMBERLY NEVALA: All right, so you and I talked back in April. What have been the most significant developments, positive or perplexing, in the future-of-work space since we last talked?

CHRISTINA COLCLOUGH: Well, I think we can't avoid talking about generative AI, can we? So that's something that has been a development. But I would also say, of course, there's a lot happening on the policy front.

But let's take generative AI first. I mean, to be honest with you, I'm quite scared of the speed of the rollout of all of this and the very laissez-faire attitude of everybody who seems to be jumping on board, loving it very uncritically.

We have to look at the power extraction that's taking place right now. We have to look at the possible and big-scale effects this is going to have on the labor market. We've seen the strikes in Hollywood, and I think that's only going to be the beginning of all sorts of discontent, which rightly should be there.

So really, the speed has been alarming, and so has the void it has been cast out into: a regulatory void. But also the lack of competencies, of course, amongst the population about what the pitfalls of these technologies are. And here, I'm thinking not only about this power extraction, but also something that I call digital colonialism, i.e., the expansion of particular norms and values across the world.

So to be honest with you, Kimberly, that's been my biggest concern. And also that there seems to be no counterweight to this. And I don't mean that we should, therefore, reject all technology, but more something I call disruptions obligations: i.e., if you start as a filmmaker using AI systems or generative AI to do the script, what should be your obligation to those you're disrupting? What should be your obligation to wider society in terms of what culture we want, what society we want?
So disruptions obligations, I think, are going to be the next big thing, and I think we're fortunate that SAG-AFTRA have taken the strike very seriously to pose precisely that question.

And then on the policy side, if I can continue…

KIMBERLY NEVALA: Yes, please.

CHRISTINA COLCLOUGH: Lots going on, both in Europe but also in the US and around the world: an acknowledgment of the need to regulate these technologies.

But unfortunately, most of those discussions fall very short of really putting action behind words about safeguarding fundamental rights and so on. So I'm very busy there, trying to put some sense and sensibility into what is otherwise a market-based regulation.

KIMBERLY NEVALA: Yeah, it's been interesting. I just spoke with Henrik. And he made the point that the EU AI Act has been really stretched out due to the fact that generative AI and large language models in particular have come on the scene so fast. And suddenly there was this scramble to try to redevelop or to extend rules.

It spawned this thought that our approach to this is a bit brittle. In that if we are hoping that at any point in time we have a regulation that speaks to, in a rules-based way, the current state of the art of the technology - and we cannot implement any regulation until such time as that is the case - we will never, in fact, have regulation.

CHRISTINA COLCLOUGH: Right. But I think as well, there are other ways we could go about this. Because yes, generative AI is a new spin on the AI wagon, but there are some essentials here. We have data; we have algorithmic profiling or inferencing, which is similar across all technologies, be it deep learning, machine learning, AI, or generative AI.

And if we want to have regulation that can outlast the next technological development, let's look at the core of this. Let's look at the here and now: what are the immediate effects? All actors are fired, or all screenwriters are fired, whatever. But also, what are the future effects of this algorithmic inferencing and the form of manipulation that occurs?

So I think, yes, the European Parliament took a brave move, and the whole European Union, really, in incorporating generative AI. But I still think they could have turned things upside-down and said, OK, we have the GDPR. It has its faults, but it's there. How do we now regulate for these here-and-now and long-term effects of automation as well as quantification? If they had built their regulation around that, I think it would stand the test of time.

KIMBERLY NEVALA: You mentioned disruptions obligations. Again, interesting, because we tend to look back at history and say, hey, this has all happened before. And yeah, it was rough, but we all got through it.

We seem strangely sanguine about saying, this is going to be really painful for certain people. Although, most likely, the people saying this don't tend to be the people this is going to be painful for. Without then saying: shouldn't we learn from history and proactively design something to avoid that circumstance and that pain for all of these folks? Is that the core of what you're talking about with disruptions obligations?

CHRISTINA COLCLOUGH: No, I mean, it's a great perspective, Kimberly, and in many ways I agree with you, and in some I don't. Because yes, there's been technological change ever since the invention of mankind, so to speak. I think this time is different, though.

It is different because you have those two aspects of digital technology. You have the automation part, but also the quantification part: i.e., turning our actions and non-actions into quantifiable measurements, which then become the “truth” about us. And based on those inferences and that profiling, we get sold certain goods, or we can't get insurance or a mortgage or a job or whatever it may be. So this time, it's fundamentally different. It's also hitting across all sectors and all professions.

But where you are right, Kimberly, is that before, it was more routine tasks that got automated.
And yet, we still have jobs today across the world which are just so inhumane. People across the world cleaning sewers, for example. How could we put our efforts into saying, this is not a dignified job; how can we help these people, then, to find other jobs?

Now with generative AI, it's mostly white-collar workers or academics who are going to be severely affected, besides the cultural workers. So this is going to be different. It's going to hit harder at our own ego in the West, and therefore the reactions are going to be more strongly felt.

But what you essentially are also saying, and this is disruptions obligations as I call it, is that we need to ask: what society do we want? Do we want a society where people get displaced very quickly, where skills become out of date fast? Where maybe the vast majority of the working population are in precarious short-term contracts, with no possibility to fund or find the courses and skills to help them onwards?
That is very, very individualistic, shoving all responsibility away from those disrupting and onto the shoulders of those who are disrupted. Now surely this is not what our societies need.

So I want to ask all companies doing this: what are your obligations to the society in which you are embedded and on which you depend? This is fundamentally what I'm calling for here, because we could more or less develop ourselves to hell, and we are well on our way. I don't know if Henrik mentioned it, but the environmental impact of these technologies is just absurd. Who's talking about that, apart from a very few voices?

KIMBERLY NEVALA: Absolutely. So all of that being said, what is your projection for what we may see unfold as we turn the corner here into the new year?

CHRISTINA COLCLOUGH: Trouble. To be honest with you, trouble. Because our politicians are adopting the wrong regulatory approach in many ways. They say they want to protect fundamental rights, but I don't really see them doing this. They're still going for this narrative: oh, America, China, Europe, who's going to fall behind? Who needs to run in front? It's all about innovation.

And then you hear the next thing: regulation stifles innovation. So therefore, we have to rush, rush, rush and not regulate too much, but regulate some. Instead of saying, hang on a sec. Data. AI inferencing. Quantification. How do we ensure Article 1 of the Universal Declaration of Human Rights: that all human beings are born free and equal in dignity and rights? How do we ensure that in a world where we are constantly algorithmically manipulated?

Now I wish they had started there rather than starting on this whole market thing.

KIMBERLY NEVALA: Well, it is a bit more of a snarly question.

So if you were directing the agenda, where would you have us focus? You may have spoken to this some already, but reiterate for us: where would you have us focus our attention in the new year?

CHRISTINA COLCLOUGH: Yeah, this is a personal thing for me. I think in the new year, and in the years to come, the many years to come, the pertinent thing is what I call inclusive governance.

That is: OK, yes, these technologies have vast potential, and our world has so many problems that we could possibly address using technology. But to do this in a way that does not cause harm to the already marginalized. Or that does not lead to some hegemonic transfer of one particular country's values to the rest of the world, et cetera. To do that, we need to listen to the subjects of these systems. So if you're unemployed and there's an algorithmic system that's matching you with jobs, you or an organization that represents you should be heard on how well this is working. Only in that way will we be able to find out: is this tool actually discriminating against particular groups or age groups or whatever it may be?

So I really, really say, no matter what technology, if it's AI, if it's data-driven, if it's machine learning, deep learning, whatever it may be: all these systems must be inclusively governed.

Now, I can hear a lot of people say, oh, that sounds bureaucratic. That sounds like it's going to take time. But this is precisely the thing that we have to say: yes, we can tap into the potential of these technologies, but it will take time to ensure that no one is harmed. And if harms are caused, that these can be rectified.

So this whole idea, inclusive governance, is a counterweight to the power extraction that we're witnessing right now into the hands of a very few, very powerful companies which are beyond the reach of any form of democracy. So: inclusive governance, strengthening democratic control and ownership over these systems, and then really, really co-determining to what use they should be put.

KIMBERLY NEVALA: I feel like we should take a minute and pause on that because it's a really deep, important point. It also struck me, though, that we have to rethink what we mean by innovation. Because right now, innovation seems to be entirely poised on the point of the “right now, if not yesterday” pin. As opposed to a view of innovation that says, at the right time, which could mean slow and steady as well.

CHRISTINA COLCLOUGH: Yes. At the right time for the right purpose.

At the moment, the only purpose is to increase market share and shareholder value. This is not the purpose that our planet needs, neither environmentally, socially, nor geopolitically. So yes, this is fundamentally asking: what are we producing or innovating for?

Now to be honest with you, I know a lot of people are going to think, oh, she's so naive sitting there saying that, because you can't stop technological development. And probably you can't. But still, you can regulate for technological development. Which, as we spoke about earlier, is being attempted, more or less rightly.

But I think we are at a crossroads in time. Our planet is burning. We have a lot of people who are exploited. We have a potential danger from these tools which is beyond the imagination of many. So now we could say: OK, post-war GDP, is that the right measure of the good of our societies? Is this really what we need now? Or do we need to somehow reformulate the market, in inverted commas, so that companies are rewarded for other types of behavior than purely economic profit?

KIMBERLY NEVALA: On that note, thank you so much.

As always, Christina cuts straight to the heart of the matter. She also brings our inaugural 12 Days of Pondering AI to an incisive close. Pondering AI will be back in our usual format after a short break to ring in the new year. Subscribe now so you don't miss it.