Technology and Security (TS)

In this episode of Technology and Security, Dr Miah Hammond-Errey speaks with Meredith Whittaker, president of Signal. The interview explores key contemporary issues in technology and Artificial Intelligence (AI). They discuss the impact of AI on elections and democracies, including the need for stronger local media ecosystems and an improved focus on the ‘mediating’ role of social media platforms in the information ecosystem. They discuss the concentration of AI power and the business model’s reliance on mass data collection, including the need to rewrite the tech stack for privacy, not surveillance.
 
This episode also explores developing democratically focused public digital infrastructure without profit incentives, and highlights the role of open-source libraries and systems as part of the core infrastructure of the technology ecosystem. It also covers the significance of autonomy and agency in neurotech applications, and how to improve tech board governance through increased personal liability, accountability and transparency. Also, how many downloads Signal has actually had! Meredith Whittaker is the president of the Signal Foundation. She has nearly 20 years of experience in the tech industry, academia, and government, and co-founded the AI Now Institute.
 
Resources mentioned in the recording:
 
·               Meredith Whittaker, link to talk
·               Meredith Whittaker, link to reading 
·               Meredith Whittaker, link to watching  
·               Meredith Whittaker, link to listening 
·               Miah Hammond-Errey, 2024, Big Data, Emerging Technologies and Intelligence: National Security Disrupted, Routledge (20% discount code for book AFL04)
·               Byte-sized diplomacy (column), The Interpreter, 3 July 2024, AI-enabled elections or deepfake democracy?
 
This podcast was recorded on the lands of the Gadigal people, and we pay our respects to their Elders past, present and emerging. We acknowledge their continuing connection to land, sea and community, and extend that respect to all Aboriginal and Torres Strait Islander people.
 
Thanks to the talents of those involved. Music by Dr Paul Mac and production by Elliott Brennan. 

What is Technology and Security (TS)?

Technology and Security (TS) explores the intersections of emerging technologies and security. It is hosted by Dr Miah Hammond-Errey. Each month, experts in technology and security join Miah to discuss pressing issues, policy debates, international developments, and share leadership and career advice.

Miah’s Twitter: https://twitter.com/Miah_HE
Contact Miah: https://miahhe.com

Dr Miah Hammond-Errey: My guest today is Meredith Whittaker.

Dr Miah Hammond-Errey: Meredith Whittaker is the president of the Signal Foundation. She has nearly 20 years of experience in the tech industry, academia and government. She co-founded the AI Now Institute, was the Minderoo research professor at New York University, and spent over a decade at Google leading product and engineering teams. Her work has helped shape global AI policy and shift public narratives on AI. She has provided advice to many governments globally, including advising the chair of the US Federal Trade Commission on AI. I'm really thrilled to have you join the podcast, Meredith.

Meredith Whittaker: Yeah, it's great to be here. Thank you.

Dr Miah Hammond-Errey: We're coming to you today from the lands of the Gadigal people. I pay my respects to their elders, past, present and emerging, and acknowledge their continuing connection to land, sea and community. I extend that respect to all Aboriginal and Torres Strait Islander people.

Dr Miah Hammond-Errey: In 2019, at Falling Walls, you made the case that AI demands collective action. Five years later, where do you think we're at?

Meredith Whittaker: I think, to go back to the case that I made, which I still stand behind, part of what I was pointing out is that AI is a tool that benefits a small number of people, but often at the expense of the majority. So we're in a situation now where the resources needed to create AI, which are mass amounts of data, extremely expensive computational infrastructure, and the scale and market reach to both collect and create data and to use or sell AI across vast populations, are pooled in the hands of a handful of companies. So this is a tool that can really only be produced and deployed from start to finish by these companies, which are largely or entirely jurisdictioned in the US and China. And it is incredibly expensive to create and use. It costs a huge amount of money to train an AI model; we're talking $100 million, more than $100 million. Running an AI model, doing inference, is also much, much more expensive than traditional information retrieval.

Meredith Whittaker: So when I talk about collective action, it's really about what forms of power could suffice to push back on harmful uses of this technology: on surveillance, on manipulation, on disinformation, on the other harms here. And collective action is a traditional form of power building among those who have less power: collectively gathering what they have to resist some of these forces of inequality.

Dr Miah Hammond-Errey: There's been so much discussion this year, particularly about the threats of AI to elections. And I wanted to get your thoughts.

Meredith Whittaker: Well, yeah, I think rightly so. The threat of deepfakes, disinformation, all of this is real. But I think oftentimes these discussions miss the forest for the trees. They often focus a lot on how we are going to validate the provenance of a given image or utterance, how we are going to tell if an image or video is real or created with AI deepfake technology. Those are not unimportant questions. But of course, we don't just stumble across a deepfake on the street when we're walking. We generally access our information, our media, our news, our hot takes, via social media platforms that are themselves some of the biggest users of quote unquote AI, that were the pioneers, to use a settler colonial term, pardon me, that were some of the first to recognize, in this current AI era, the utility of these old techniques and to use them for optimizing their engagement algorithms and their ad serving. So I think we need to go back to the source and recognize that there's a lot we can do by focusing on these social media platforms: the way they surface and serve content, the way they optimize their engagement functions, the way they actually use AI to keep us hooked or to determine what content goes viral and what content gets suppressed. We need to see those platforms as mediators that deserve a lot more attention in this conversation about AI, elections and the validity of our information ecosystem.

Dr Miah Hammond-Errey: Yeah, absolutely. Would you characterize that as the biggest issue then for democracy and AI?

Meredith Whittaker: Well, if we want to pan out and talk about democracy, I think there's a lot more to talk about than simply AI. There's the fact that we have five social media platforms that effectively control our information environment globally, that four of those five are jurisdictioned in the US, and that, in relation to those, we are atrophying our traditional media ecosystem. Local news has been all but totally killed. What passes for local television in the US is owned by one conglomerate, the Sinclair Broadcast Group, which radiates often far-right talking points across that ecosystem. So it appears to be local, but in fact it is a coordinated ecosystem. The business model for media has been all but decimated by these platforms. That needs to be front and center in conversations around democracy, as do things like campaign financing, which in the US is a huge issue. We lifted effectively all barriers to campaign financing, meaning that the people who are getting elected, to federal office in particular, have to appeal to those who have billions of dollars if they want the resources they need to compete. That's an anti-democratic step. And then, of course, there is the issue of multinational corporations, who in a lot of ways have found end runs around the power of individual states by leveraging them against each other with promises of jobs or the withdrawal of economic resources, which is also a very anti-democratic force. So we can't take these things in a vacuum. AI is not causing an erosion of democracy. It is a symptom of a larger set of problems.

Dr Miah Hammond-Errey: I want to ask you about the influence capacity of AI. Generative AI, and the use of voice assistants in particular, seems to lead to our interactions with AI platforms becoming the first and possibly last point of inquiry. Answers are often poorly sourced, or completely unsourced, and quite opaque. And whilst it's not in any way perfect, this is quite unlike Google search, where at least you can see there are multiple options and you can see which ones are sponsored. Many of our education programs around information literacy, for example, include sources as a key consideration. I wondered what you think this might do to society and to the capacity for influence.

Meredith Whittaker: Well, I am of course worried. You can look back at the work of Joseph Weizenbaum, who was an early AI researcher at MIT in the 1960s and 70s, was a Holocaust survivor, and created a very, very early version of the chatbot, back in the day when this was an incredibly rudimentary tool. He named the chatbot ELIZA, and ELIZA was meant to mimic a formal, austere form of therapy. You would type into a terminal, the ELIZA terminal; this was not a voice recognition system. Weizenbaum documents in his book Computer Power and Human Reason being very disturbed by these interactions, because what he was seeing was a system that everyone involved, himself included, understood to be very limited. Right? This is not a human being. This is nothing close to consciousness. And yet the way people were interacting with it was as if it were a sentient interlocutor. They were doing a huge amount of work to bridge the gap with a machine that was just a machine, spitting out its answers via some decision tree system.
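[For readers curious what such a system looks like, here is a minimal, illustrative sketch of an ELIZA-style responder in Python. It is not Weizenbaum's original implementation, and the rules and phrasings are invented for illustration, but the mechanism, a handful of pattern-matching rules plus pronoun reflection, is the kind of simple machinery being described.]

```python
import random
import re

# First/second-person swaps so replies mirror the user's statement.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

# (pattern, candidate replies) pairs; the last rule is a catch-all.
# These rules are invented for illustration, not Weizenbaum's script.
RULES = [
    (r"i need (.*)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"(.*) mother(.*)", ["Tell me more about your mother."]),
    (r"(.*)", ["Please go on.", "How does that make you feel?"]),
]

def reflect(fragment):
    """Swap pronouns so the reply echoes the user ('my job' -> 'your job')."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(statement):
    text = statement.lower().strip(".!? ")
    for pattern, replies in RULES:
        match = re.match(pattern, text)
        if match:
            return random.choice(replies).format(*(reflect(g) for g in match.groups()))

print(respond("I need my family to understand me"))
# e.g. "Why do you need your family to understand you?"
```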

Meredith Whittaker: But because it was mirroring an interaction with a sentient interlocutor, a conversation with a regular person, people were beginning to relate to it as if it were much more than it was. Now, today, we would not consider that artificial intelligence; we wouldn't really think about it. But I think that tells us a bit about the problems with these more advanced systems. Right? They're still not human. They're not conscious. They are sucking in the data that they have and effectively spitting out the average of that information. These probabilistic systems, the LLMs and the types of models behind the ChatGPT system and others, are not conscious at all. They don't know what they're saying. And alone, they will never be capable of guaranteeing accuracy. What they're guaranteeing is plausibility. The thing they spit out in response to a given prompt will look similar to something that is accurate, perhaps, assuming that the aggregate of data they have been trained on contains some accurate information, but you can never guarantee that. So these are not systems that are suitable for any domain in which accurate information is important, say, medicine or education or law. These are systems that can give you a gesture of a correct-looking answer.

Meredith Whittaker: But if there are no sources, if there is no clarity on where that came from (say, are you training this system on Reddit and Wikipedia and other sources that may not actually be correct, that may echo cultural fairy tales rather than the real history?), then you're effectively creating an epistemic function that is fundamentally misinforming people, and that is training people to trust a reflexive, plausibly shaped sentence without giving them the skills they need to dig in, to require the citations, to discern an incorrect but attractive answer from a robust set of facts. I think that is incredibly dangerous, particularly since these systems don't give the same answer to the same question every time. So you can have a scenario in which five different people ask a similar question in slightly different ways and get five very different answers, all of them taking those answers as a given and not looking any further for the facts. And given that these systems are controlled by only a handful of corporations, this is a very centralized industry, and the temptation, or the ability, to put a finger on the scale and produce certain kinds of answers and not others could lead to a situation in which you have mass disinformation capabilities that are opaque and unaccountable. So, yeah, we should be worried about that.
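[To make the plausibility point concrete, here is a toy sketch in Python of how sampled text generation can return different answers to the same prompt. The prompt and probability table are invented for illustration; a real LLM computes such distributions with a neural network over a large vocabulary, but the sampling step works in the same spirit.]

```python
import random

# Invented toy distribution over next words after a fixed prompt.
# A real LLM would produce these probabilities from a neural network;
# the point is that output is drawn from a distribution, not looked up.
PROMPT = "The treaty was signed in"
NEXT_WORD_PROBS = {"1848": 0.5, "1846": 0.3, "Paris": 0.2}

def sample_next_word(probs):
    """Draw one word at random, weighted by its probability."""
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights, k=1)[0]

# The same prompt can yield different, equally plausible-looking
# continuations on different runs; none is checked against any source.
for run in range(3):
    print(PROMPT, sample_next_word(NEXT_WORD_PROBS))
```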

Dr Miah Hammond-Errey: I agree. And as you mentioned earlier about the challenges in media literacy and information literacy generally, we actually want to be nudging in the other direction, right? Giving people more skills, and access to the resources, to think critically about information.

Dr Miah Hammond-Errey: You've written that surveillance emerged out of the very beginnings of the internet. Do you think it's possible to reimagine a future without it?

Meredith Whittaker: Surveillance, if we think about it as a tool of centralized power, has been around for a very long time. We can look back at the dawn of state statistics as a mechanism for governing populations, for understanding: how much grain do we have? How many men do we have who might be willing to go to war? How many people do we have to conscript for labor? Those questions were answered by enumerating and keeping track of people and our shared world. So surveillance is old. Digital surveillance is a continuum of that will to power and of the way that centralized power constitutes itself, largely through information asymmetries. What emerged out of the 1990s was what I would call the original sins of this surveillance business model.

Meredith Whittaker: Of course, advertising implores us to know our customers, to learn as much about them as we can so we can target only the ones who might actually be interested in our product or willing to buy something: housewives between certain ages who do Pilates in a certain neighborhood, that kind of segmentation. So you put in place the foundation for a business model that is still the engine of the tech industry, which is mass surveillance and the use of that data to create classifications of people that you could then sell advertisers and others access to, with the goal of informing slash manipulating those people and pushing them to make choices: to click on certain ads, to like or dislike certain political positions, or what have you. All of this is at the root of the issues we are facing today with the monopoly platforms and their outsized control over our world.

Dr Miah Hammond-Errey: So obviously the data economy is vast and poorly regulated. What are your top wish list changes?

Meredith Whittaker: I think there is a need for a pretty radically different technical ecosystem and set of incentives. Some of the first measures that I think would be necessary are something along the lines of structural separation, where we begin to undo the vertical integration of a lot of these companies, separating, to be a bit colloquial about it, the infrastructure layer from the application layer. I wouldn't propose that we undo the globally interconnected computational infrastructure that allows us to quickly send an email between Manhattan and Sydney; that is very useful, and it's part of the nervous system of our current world. But having only a handful of companies control the platforms, the services, the affordances on top of that is a huge problem. So we need to solve that. I think we also need to significantly walk back the types of surveillance that are at the heart of these businesses. One thing that I have talked about repeatedly is the fact that a very moderate reading of the GDPR could support banning surveillance advertising. Doing that would cut off the flows of personal data that are feeding these systems, that are feeding the authority we're giving these companies to define us and our world and to classify our place in it, and it would also significantly cut back on the data-hungry, surveillance-hungry AI industry that is supercharging this paradigm right now.

Meredith Whittaker: I think we also need to create robust digital public infrastructure alongside that. What does it look like to more democratically construct and govern technologies that actually meet our needs? What is a local slash global approach to developing applications that meet the needs of different communities? What do localized data creation practices look like? I take some inspiration from some of the work happening in New Zealand with the Indigenous data sovereignty efforts, which are looking at how we negotiate what types of information about our practices and our culture are shared with, or perhaps extracted by, AI and tech companies, and how we do that in a way that we feel has integrity and respects that legacy. All of these questions need to be answered in parallel, and we need to bring a lot more people in to answer them as well. It shouldn't just be folks like me who come from tech. We need people who have other expertise and other visions to share.

Dr Miah Hammond-Errey: I'm going to jump to the concentration of technology and power. My own research and book findings echo your work outlining the concentration of power in technology, in terms of data, compute and talent, commercially and geographically. What are you thinking about most in this space?

Meredith Whittaker: The concentration of power in the AI industry is something that I've worked on with the AI Now Institute for a number of years as really the central concern of AI right now. These technologies are fallible, and the fact that they are controlled and governed by only a handful of corporations, whose goals are ultimately profit and growth, is a central concern of mine around their mass dissemination. With concentration of power comes an incredibly attractive target for political influence and manipulation.

Meredith Whittaker: I think we should be really concerned about the potential for capture and for political weaponization that exists because of this centralization.

Dr Miah Hammond-Errey: If there were any changes you could make to US corporate governance to address tech company issues, what would they be?

Meredith Whittaker: Oh, God. That is a great question, and I wish I had a lot longer to answer it. I think personal liability for directors and executives is always helpful, because no one wants to get that on them. I also think, and this isn't necessarily at the boardroom level, we need a lot more transparency. We should not be taking these companies at their word. We should not be satisfied with voluntary commitments, given the track record here, and we need a lot more documentation of the rationale for decision making and accountability for the decisions made. I think we also need forms of auditing and scrutiny that actually dig into the marketing claims, particularly around these AI systems, and compare the reality with what is being claimed. Right now we have little to none of this. And then there needs to be real accountability for making false claims. The FTC does have some powers over deceptive advertising, but they're fairly weak in terms of the remedy, so there's a ways to go here. I do want to add a caveat around this sort of scrutiny and transparency: there's a real danger of audit washing. You have, say, PricewaterhouseCoopers come in with a checklist, and once that checklist is filled out, they say everything's fine. But ultimately that's an exercise in weak compliance, not an exercise in the kind of accountability that leads to more democratic decision making.

Dr Miah Hammond-Errey: You've spoken a lot about the nexus between gender and big tech. I was just going to quickly cover three issues and ask for your reflections on them. The first is entrenched racial and gender bias in training data. The second is the disproportionate harms and silencing of voices experienced by many groups, but particularly women, non-white, Indigenous, and gay and lesbian people. And the third is the fact that more women are punished and pushed out of tech company roles for challenging the status quo. I wondered if you could share some of your recent reflections on that.

Meredith Whittaker: I think we're not going in a great direction. There have been a lot of efforts over a number of years to address these issues, and over the past year or so we've seen a really reactionary cultural shift in Silicon Valley: directly targeting DEI initiatives, almost a revanchist moment of directly targeting racial and gender equity. You're seeing that kind of far-right culture spread across Silicon Valley very quickly, and there are a lot of people celebrating the end, or the fall, of many of those efforts. I think that's very dangerous right now, because we're at a moment where we are seeing the ascent of authoritarianism globally. And we do have systems that are opaque, that are making significant decisions, that are taking on more and more epistemic authority, by which I mean the authority to define our world and what we know about it, and that authority is in the hands of a handful of corporations via their AI systems and models. These systems are moving that authority away from more grounded processes, in ways that I think could be used for forms of mass social control that are hard to imagine, but that we really must focus on and push back against if we want to retain a livable world, particularly given the climate inflection points that continue to pass us by.

Dr Miah Hammond-Errey: I'm going to go to a segment. What are some of the interdependencies and vulnerabilities in the technology ecosystem that you wish were better understood?

Meredith Whittaker: Oh, wow. Just taking that question of vulnerabilities: anyone who's worked in tech for any amount of time will recognize that our networked computation, the infrastructure on which all the rest relies, is built on open-source libraries and systems that are often poorly maintained, often maintained by volunteer labor, and that are core to the functioning of all the rest, yet in many cases in deep disrepair in ways that do present significant vulnerabilities. I could name a number of them, but probably no one would have heard of them, because they're not top of mind; they aren't what we think of when we think about tech. But the need to actually resource this core infrastructure, not simply extract and rent-seek on top of it, is absolutely crucial. Signal is in fact representative of one of these core infrastructures. We are a non-profit because we believe that if we were to put profit as our core objective, we would be tempted or coerced or otherwise pushed into some form of monetizing data collection and creation, because there isn't really another business model for the type of thing we do. So we are a non-profit, and as a non-profit we rely on donations. It costs us about $50 million a year to function.

Meredith Whittaker: And that's forever. That's the cost of maintaining and building these systems; it's a job that has to be done over and over again every day. That's the cost of bandwidth and infrastructure. I think that little vignette offers both an insight into just how profitable this business model is, and an insight into the fact that we need real capital if we're going to actually develop these alternatives. Throwing a couple of hundred thousand dollars at a proof of concept isn't going to build a new, healthier internet, right? We actually need to take this seriously. We aren't out of ideas; we have plenty of ideas. This isn't a question of having not thought of the right thing. It's a question of resources. It's a question of capital. And it's a question of how we push back on the pathologies of the current business model while resourcing alternatives that will actually lead to better futures.

Dr Miah Hammond-Errey: Signal is an encrypted messaging app, and it will be well known to my listeners. It's used by about 40 million people globally and... oh, oh, oh, please give me your new stat.

Meredith Whittaker: We don't give that out publicly, and I don't know where 40 million came from; it just floats around. But anyway, to be a pedant about it, if you look at the Play Store, you can see that Signal has been downloaded over 100 million times, and that number is actually close to 200 million when you look at the dashboard. And that's just one platform.

Dr Miah Hammond-Errey: Let's just say Signal is generally considered the go-to app for private communications and is used by many millions of people around the world.

Dr Miah Hammond-Errey: Going to a segment now: emerging tech for emerging leaders. What do you see as a key priority for leaders in the current technology environment?

Meredith Whittaker: I think you need to kick the tires. There's a willingness to believe the hype because it's delivered by other powerful people, and a constant fear that you're behind the ball: oh my God, we need an AI strategy, because an AI strategy is the thing everyone's talking about, it's the hot new thing, and we have no idea what AI is or how it could actually be useful to our institution or business, but we feel pressure from the herd, so we're going to go adopt it. I think there is a need to take humility seriously as a leadership attribute. When you are in a position of leadership, you are never going to be the person who knows the most about any given topic. So really ask the fundamental questions: okay, you say we need AI. What is it? What does it do for us? How much does it cost? How much of our data does it expose to third parties who could misuse it? How do we validate that it's actually useful for our use case? What are the benchmarks and evaluations we conduct to do that? Even asking what seem like really basic questions would set most leaders up to make much, much better choices, and probably much more profitable choices in the long run, than diving head first into something that is unlikely to be the one-stop shop that solves all problems, which is what is being sold right now. And we can think back two years, when everyone was trying to figure out what to do with the blockchain, right? You had some pretty absurd implementations that have gone nowhere and wasted a lot of time and a lot of money and a lot of servers. So maybe we can avoid that this hype cycle.

Dr Miah Hammond-Errey: Going to a segment called Eyes and Ears. What have you been reading, listening to, or watching lately that might be of interest to our audience?

Meredith Whittaker: Well, just last night I was reading a report from the Quincy Institute on the amount of private capital that is going into military tech.

Meredith Whittaker: I'm listening to, well, I was listening to an Oscar Peterson live in Tokyo recording, which is really beautiful.

Meredith Whittaker: I'm living in Paris right now and I don't speak French, but I'm trying, which is just an exercise in daily humility, honestly. So I'm watching this show called Marseille, which is just so trashy. It's really... it's good, it's trashy. But I'm trying to practice my French. And, you know, I've got the swear words down.

Dr Miah Hammond-Errey: In what ways do you think potential conflict between the US and China is shifting the tech policy debate in the US?

Meredith Whittaker: Well, it is central to a lot of the tech policy debate, and I think in some sense a lot of interested parties in that debate have been stoking that narrative. This is something we wrote about a number of years ago. The idea that if the US puts any guardrails, any regulations, any restrictions on the unfettered surveillance and development of AI and other systems, it will lose to China is doing a lot of work to, I believe, promote unsafe technologies and to brush away fundamental questions about their efficacy, about the contexts of use, and about their appropriateness for certain domains. So it's used to push back on some of the most urgent questions being raised in more democratic contexts, and to say that if we spend any time dealing with these, we are losing to China. The question then becomes: what does winning mean? Do we want to win a race to an authoritarian surveillance state, or are there perhaps other goals that we could shoot for?

Dr Miah Hammond-Errey: Has your current role as president of Signal changed your perspective on global technology and policy in any way?

Meredith Whittaker: Well, it's certainly given me a really clear and privileged lens into what it means to be doing the day-to-day running of an organisation that has to build and maintain these systems come rain or come shine. I am constantly aware, in a different way, of just how much of the ecosystem on which Signal, and any other technology organisation, depends has been shaped by the imperatives of the surveillance business model, and of how significant the control of the large corporations and gatekeepers is over this ecosystem. Signal is constantly having to work around norms and paradigms that assume that the collection of as much data as possible, and the storage of as much data as possible, is the goal. We have to rewrite pieces of the stack to do things privately where surveillance is assumed to be the norm, and then we have to rely on infrastructures that are provided by these large players and ultimately shaped by them, because it's really not possible in this day and age to provide high-availability global tech that works in real time without relying on those. So it has sensitized me in a new way to just how pervasive those dynamics are, and how much that legacy shapes every part of tech, even if you're swimming against the grain and working to do things differently.

Dr Miah Hammond-Errey: You co-authored a piece some years ago on the ethical priorities of neurotech. Can you describe the interplays between neurotech, data and AI, and how the concerns you raised in that piece are being responded to?

Meredith Whittaker: I'll just caveat that that piece was co-authored with a lot of different authors. I actually gave a talk at Boston University a couple of years ago that looked particularly at issues of agency and epistemic authority related to neurotech, and that dug into Elon Musk's Neuralink as the example. I think there are huge issues with this technology, not least of which is the fact that we're talking about implanting chips and other hardware into people's bodies in a lot of contexts. And we already see assistive technologies and brain-computer implants whose users have been abandoned by the companies that make them.

Dr Miah Hammond-Errey: The responsibility element is, yeah, really significant.

Meredith Whittaker: So, you know, operating systems that are no longer updated, companies that go out of business, and you just have some metal or plastic or silicone in your body that is not maintained. That is just a huge issue. And ultimately, what you're talking about is giving a company (probably a company; I don't think this is going to be a government institution) the authority to determine and state what you think and feel, the ability to make claims about who you are, what you think, what you're doing, what you feel at any given moment, with more authority than you have. And I think this gets into some pretty frightening territory.

Meredith Whittaker: You're talking about mapping data from a software system into a model and then making claims based on that data that in no way actually reflect someone's consciousness at that moment. But what those claims do is displace authority from the individual who is feeling or thinking something to a corporation that claims to have the scientific authority to make that determination.

Meredith Whittaker: I think it's really dangerous to be ceding that kind of authority to private companies, particularly given the concentrated power of the AI industry and the fact that this is going to be technology that will at some level need to be monetized. It's going to be technology that will be in the hands of a handful of centralized companies, just because of the cost of building it and of running it.

Meredith Whittaker: It's the fact that these companies are making claims, which we see as plausible, to read our minds and define our person with more authority than we have. I think that is really what's dangerous. And what happens when they sell that ability to our employers? When, in order to work in a certain industry or for a certain company, you need to be implanted with Neuralink, and your boss has a big board of employee sentiment based on your fMRI data? And you are neuroatypical and your data doesn't look anything like the model, and you're having to make your case that, in fact, you are a focused and good worker, that in fact you are happy here, when the data suddenly has more authority than you to make that claim. Based on where the market is, and on the fact that a corporation usually has more capital than an individual worker, it's very likely that that's the kind of use case we would be seeing, not a less profitable one.

Dr Miah Hammond-Errey: We're going to a segment on alliances and multilateral forums. Do you see any areas, particularly, where we can work together to create societies that have vibrant and resilient information ecosystems?

Meredith Whittaker: I think there are a lot of opportunities to do that, but it's going to take simultaneously visioning and creating those ecosystems. Barcelona in Spain is a good example of a more democratically governed data commons, and there are digital sovereignty programs happening in some capacity in Europe and elsewhere. We can look at funding core infrastructures outside of the big tech paradigm. We can look at developing smaller, more purpose-built AI models, or other techniques for assessing and finding patterns in data, where we think that's useful for governance and for understanding our world. And then we can look at community-led and community-determined data creation efforts. So how do we decide what we want to know about ourselves and our world? How do we create methodologies to do that in a way that seems accurate and feels safe and just to people? And then how do we develop governance structures that can actually decide when and how that data gets used, and how consent works in that context? All of those things need to happen alongside pushback against this massive big tech industry, which is not going to simply yield because we thought of a better alternative. This is not a contest of ideas; this is a contest of power. So we need both. We need the pushing back and the building, and those need to happen at once.

Dr Miah Hammond-Errey: What technology brings you the most joy? And how do you wind down and disconnect from technologies?

Meredith Whittaker: Well, I'll use Ursula Franklin's definition of technology, which is, quote, "the way we do things around here." So almost anything can be a technology, and the technology of warm water and bathtubs is one that I really like at the end of the day. If we're going to use a more common-sense, popular definition of technology, which at this point is kind of anything that touches a computer, I really like Tetris. I think Tetris is just an incredibly relaxing game. In fact, I was the champion on the Tetris app on Android for many years, before they killed that app. So, you know, I also like winning. That was fun. And how do I wind down? Well, I do yoga every day. So I guess that's a pretty basic but true answer.

Dr Miah Hammond-Errey: So if listeners want to imagine a different and positive future, one that reflects the values that they, and many Australians, hold, what do you think is something they can concretely do to move us away from a big tech controlled world?

Meredith Whittaker: This is a hard question. Again, this is why I focused on collective action in 2019 and not individual choice, because I think in a lot of ways these aren't matters of individual choice; they're systemic. Everyone will have a place of intervention, but that place will almost certainly be systemic. It will be pushing our institutions to do better: making sure that our workplace answers questions about data privacy before adopting some new app, that our schools are thinking of children's privacy. There are always these places within our context and our community where we can push back. That is local, but it is also going to require coordination and some collective action. And it will require recognizing that oftentimes we individually aren't making these decisions; our institutions and governments are making them for us.

Dr Miah Hammond-Errey: Meredith, thank you so much for sharing your thoughts with me.

Meredith Whittaker: Yeah, it was great to be here. Thank you.

Dr Miah Hammond-Errey: Thanks for listening to Technology and Security. I've been your host, Dr. Miah Hammond-Errey. If there was a moment you enjoyed today or a question you have about the show, feel free to Tweet me @miah_he or send an email to the address in the show notes. You can find out more about the work we do on our website, also linked in the show notes. We hope you enjoyed this episode and we'll see you soon.