Pondering AI

Kati Walcott differentiates simulated will from genuine intent, data sharing from data surrender, and agents from agency in a quest to ensure digital sovereignty for all.

Kati and Kimberly discuss her journey from molecular genetics to AI engineering; the evolution of an intention economy built on simulated will; the provider ecosystem and monetization as a motive; capturing genuine intent; non-benign aspects of personalization; how a single bad data point can be a health hazard; the 3 styles of digital data; data sharing vs. data surrender; whether digital society represents reality; restoring authorship over our digital selves; pivoting from convenience to governance; why AI is only accountable when your will is enforced; and the urgent need to disrupt feudal economics in AI.
 
Kati Walcott is the Founder and Chief Technology Officer at Synovient. With over 120 international patents, Kati is a visionary tech inventor, author and leader focused on digital representation, rights and citizenship in the Digital Data Economy.

Related Resources
A transcript of this episode is here.   

Creators and Guests

Host
Kimberly Nevala
Strategic advisor at SAS
Guest
Kati Walcott
Founder & CTO, Synovient

What is Pondering AI?

How is the use of artificial intelligence (AI) shaping our human experience?

Kimberly Nevala ponders the reality of AI with a diverse group of innovators, advocates and data scientists. Ethics and uncertainty. Automation and art. Work, politics and culture. In real life and online. Contemplate AI’s impact, for better and worse.

All presentations represent the opinions of the presenter and do not represent the position or the opinion of SAS.

KIMBERLY NEVALA: Welcome to Pondering AI. I'm your host, Kimberly Nevala.

In this episode, it is a pleasure to bring you Kati Walcott. Kati is a tech inventor and investor who is currently the Founder and CTO of Synovient. She joins us today to discuss the intention economy, digital representation, and human agency in our increasingly digitized world. So welcome to the show, Kati.

KATI WALCOTT: Thank you so much for having me here, Kimberly.

KIMBERLY NEVALA: Absolutely. It's our pleasure. Now, anyone who looks at your profile won't be surprised to learn that you have published numerous books. You have over a hundred patents. They might not, however, assume out of the box that you started your journey as a molecular geneticist. So tell us, how do you go from molecular genetics to being at the front end of tech today?

KATI WALCOTT: I started studying genetics because I wanted to do something exactly the opposite of what my father wanted me to do. He was an engineer and I grew up underneath him, and he's always been very kind and spent a lot of time teaching me and, from a very early age, really involved me with science and technology.

When we came to this wonderful country and we had the opportunities to study anything I wanted, my father said that I will be an engineer. And I didn't agree with him. We had a lot of disagreements. And because of that, I studied biology and genetics, and it was always interesting to me. But I have to tell you, biology and genetics gave me some of the core competence and interest in taking things apart and putting things back together.

And while it seems that the two areas, the two technology areas, biology and computer science, are orthogonal, there are a lot of similarities. And the opportunities that I have come from the way I think and the way I formulate ideas and bring them together; there are a lot of things that come out of genetics and biology. Especially as we are moving into the area of AI and starting to create analogs between society and human diversity and AI and the representation in digital space. There are a lot of analogs and a lot of similarities where we can draw the knowledge across.

KIMBERLY NEVALA: Yeah, that's awesome. And I guess your father gets the last laugh on that one as it all comes together.

KATI WALCOTT: I will never say it. I will never say that he was right. But yes, I will give him that. Absolutely.
KIMBERLY NEVALA: That's awesome. Alright, so I originally found your work in an article you had written about the intention economy. And so I'd love to start our conversation talking a little bit about that. For folks that aren't aware of this concept can you provide a definition or a description of what the premise of the intention economy is, as defined by those promoting it today?

KATI WALCOTT: Absolutely. So when I look at what I call digital data economy, there's an evolution on how we got here. And that evolution starts with a very early digitalization of businesses and things that we wanted to improve access to or we wanted to have improvement on speed of access or dispersion of content, dispersion of capability. The intention economy is a version of the digital economy as we have evolved it.

But in my mind, it is not a positive evolution. The intention economy is a reflection of modeled behavior, and it is a reflection of what algorithms infer that we do. Not what we as humans actually mean or choose to do or choose to want to do. So within the intention economy is a set of systems that are built around prediction, not permission.

So when you think about your presence in digital space, when you think about what you do when you click or scroll or pause on the screen, that behavior is harvested and labeled as intent. And that intention is reflecting, is creating a mirror of, who you are in digital space. And that information is used to feed optimization loops that serve the system's goal, the AI goal. And that is to engage, to retain the data, and to monetize that data.

But that intention does not reflect human intent. You may have a different intent behind what you do. Just like when you have a conversation and you explain your position to somebody else and they say, well… and have a negative or a different response than what you've intended. As a human being, you back up and say, give me an opportunity to explain again because what you received out of that information is different than what I've intended.

Computer systems are not that kind. They don't have the ability to ask the questions to really understand your intent. They take what you say and what you provide as de facto the answer. So effectively, what we are building here in this intention economy is an economy of simulated will. It is a place where machines that model your choices do a better job in describing what they think that your intent is than the way that you can express them. And, unfortunately, they act on behalf of what they believe that your intent is without your consent and your permission given in order to get to the next step.

KIMBERLY NEVALA: When we think about intent today, and as folks talk about intent and being able to automate intent, it's almost the logical extension of hyper personalization, if you will. And so I think that the average user would be forgiven for assuming that there's more agency and intelligence in that decision making, or more orientation around their purpose, than there maybe actually is. So anyway, before I go too far down that path: whose intent is reflected, then, in the intention economy and the way that we're now implementing these systems?

KATI WALCOTT: So there were a number of things that you asked in that question. Let me see if I can break it down a little bit.

So I don't know if there is a who in that question. I think it's more of a set of things that are describing what is occurring and it's not a singular entity that is acting on your behalf. It's a concert of systems and providers that have certain goals, and some of those goals - the main part of that goal - is monetization and the economic value that is being created out of some very, very small parts.

And those small parts are content and data. And I think part of the intention economy and the Aletheia principle is really talking about how do we maintain the idea that genuine intent must remain visible and intact through every layer of that digital transformation. And in most modern systems, the opposite happens.

So the human intent that begins as something authentic, as an act of will or desire or purpose, transforms as it moves through algorithms, APIs or application programming interfaces - those are the intersections between two systems and how they converse - and data pipelines. These are all transformational places where that intent becomes abstracted and reinterpreted. And eventually, it's replaced by something we call machine inference, which is one part, in fact the core part, of AI.

So when I talk about the Aletheia principle, I refer to the truth as unconcealment. It's that management of my intent and your intent and our societal intent that are part of this machine. How is it maintained and how does that get reflected in the actual systems in which that data is constantly transformed? And there are several layers, and each layer hides more than what it reveals. This is why we end up with systems that behave as if they understand us, as if they are representing us through what you called agency.

And I think we're going to have to dive into the concept of agency a little bit more, because there is a disconnection between agents and agency. But as I mentioned, when we have systems that behave as if they understand us, they are operating on decisions and code. That is a statistical ghost of our choices. And the Aletheia principle, this truth as unconcealment principle demands that the architecture itself preserve the intent, that the data, the permissions, the computation remains transparent to that origination or that originating truth.

That truth comes from you and what you intended to do with something. And that unconcealment means you can't hyper focus on one area of it or disregard the whole. So if we don't build this correctly and enable the functions that are needed, we will be building an economy of illusions. And it's going to give us a set of simulations of intention that, through the agents and our current way of representation in this digital world, is masquerading as human will.

KIMBERLY NEVALA: And is it true then that that can, in fact, rather than helping us as individuals - whether we're customers or employees or citizens - engage with the systems in a way and have, as you said, true agency in terms of understanding what it is that we want to have happen, what our actual intent is, and then being able to see that happen, that the way we're approaching this today actually obscures some of that understanding? And could, even as we talk about the ability to personalize, about the ability to automate or create agents that can take action on your behalf, actually suppress choice and suppress our understanding of the information and how it's being used?

KATI WALCOTT: Actually, you posed a very, very interesting question regarding suppression. Because on one hand, we currently don't have the ability and the opportunity to express our choices. So it's not that it's being suppressed. It's not enabled to be expressed so it's not necessarily specifically being eliminated.

And I think we've been trained to think about words like personalization or recommendation as concepts that are benign. But in practice, they describe a set of thoughts or a set of capabilities that describe surveillance and constraint from a perspective of not necessarily what's actively being suppressed, but what's not allowed to be expressed.

So personalization doesn't actually adapt to you. It adapts you to what's most profitable and predictable. And I think that's a very important inflection point, because personalization is narrowing your range of options while telling you it's actually expanding them. But it's only expanding them within the constraints in which you can operate that are monetizable. When you step outside of those boundaries, that data is not considered. So you're only able to operate within a very narrow band. In that way, the language becomes a camouflage for extraction.

So this is why I believe that architectural honesty matters, and there is a lot of legislation focusing on elements of this. The idea is that systems should name what they do, and should not sell these personalization and recommendation capabilities, these agents that supposedly act on your behalf, as empowerment, because they are actually eroding your agency.

And there are some very simple things that you can bring up and think through. As an example, one of my personal examples: just recently, I went to visit my doctor and he measured my weight. And I looked at him and said, that number is wrong. And he said, oh, don't worry about it, we know the scale is bad. But he's still recording that data and not allowing me to use my agency to make a change. And based on that, that information is going to be used in an algorithmic way to figure out my BMI, to figure out my health score for my age, which my insurance company is going to use as data to figure out my risk, and therefore the cost that it would take to insure me. He knows it's bad data, but he's still recording what's on that scale, instead of personalizing it and allowing me to act as my own agent over my own content and have an opportunity to resolve that error.

KIMBERLY NEVALA: And it's a great example because that's an illustration of how, mindfully, you are aware that the information is being captured, and you're even aware of how it's used downstream, as is, one would hope, the doctor. But yet, we're really just following a process or a procedure. In his mind, I would assume he'd say, no, I know that this is off, and therefore I will take that into consideration. But at some point, not only have you already lost control of that data, but he loses control of it as well.
KATI WALCOTT: That's correct.

KIMBERLY NEVALA: But this is a very mindful circumstance.

Paula Helm had this phrase she called user’s behavioral surplus for the data that we shed in our digital interactions. And I thought that was somewhat brilliant because I don't know that most folks really do understand how much of that behavioral surplus is being left behind. Even though they may willingly provide some of that information in the moment to serve a particular objective that they have, whether it's buying something or getting access. Maybe we even like to see the recommendations because we don't perhaps appreciate that there are other options that we might equally enjoy that we're just not seeing or being exposed to at all. So there's that limitation. But this ability or understanding of you are actually putting information out that, as you say, limits control, and you lose control of it once it's created.

So can you talk a little bit about the limits of both user understanding and user control, and what the ramifications of that are?

KATI WALCOTT: Absolutely. You shed data on a daily basis, and there are three styles or types of data that I like to group it into. The first is data that you actively create, when you pick up your phone and you take a picture. And I have conversations with folks about this when, during Thanksgiving, I have a captive audience in a grocery store standing in line for checkout. I have conversations with people around me so that I can better understand what they think their limits are, or what they believe belongs to them.

When somebody picks up the phone and they take a picture, they believe that that picture is theirs. But they have never read the terms and conditions of that phone and taking that picture. That's the first type of data, which is data that you create and you actively know you're creating.

And then there's data that's being created about you. So when you are going through checkout and you're using your grocery rewards card, that grocery store is collecting your buying behavior and a lot of other things. As they are enabling AI in the grocery store, they're collecting information on how long you were standing in the candy aisle making your life choices over M&M's versus how quickly you just zipped through the vegetable aisle. So that is all data that they are creating about you.

And then there is something that I call inferred data, which is putting the two together in order to figure out whether or not you're a good candidate to advertise bananas to, because what is the likelihood of you buying them? And this type of information, especially the inferred information, got people in trouble.

I remember a case, and it's probably been 10 or 15 years ago, when some parents started receiving different types of advertisements for baby formula and other things. And it turns out that the daughter was pregnant. The daughter went to the grocery store and bought a pregnancy test, and then bought another one and another one and another one. The search behavior started, and then the advertisements started coming. So the daughter never told the parents that she was pregnant, but the parents found out from advertising.
So what we need to understand out of these three things is that none of these types of data is your possession, because the entity that collects the data and stores that data takes immediate possession. So in this particular case, possession is pretty much 100% of the law. And once the data leaves your possession - and it was never really in your possession, because you as a human being don't have the ability to retain it and store it in a digital sense - it stops being yours in every practical sense you can think of. The industry calls it data sharing, but it's really data surrender.

And when you look at terms and conditions on websites or applications that you use - or even on different types of regulatory documents and laws where you are signing off that you understand the data use - they talk about anonymization and de-identification. But there are reasons why you have cookies on your web browsers. There are reasons why you have web 3.0 and AI-enabled browsers. Anonymized and de-identified data can get recombined into profiles that describe you, and that cannot be reversed.

So every piece of personally created, derivative, or inferred data is a new property that is owned by whoever computed it. And in this particular case, even your intent and the data describing it dissolve as you externalize them. So the only way to preserve control, and the way that I propose we look at this problem, is to make data self-aware of its permissions. And in order for us to be able to do that, we need to create the technology to make every data object carry its own terms, enforcement logic, and revocation mechanism, and connect it back to its owner.
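To make the idea concrete, here is a minimal, hypothetical sketch in Python of a data object that carries its own terms, enforcement logic, and revocation mechanism, and stays connected to its owner. The class names, fields, and purposes are illustrative assumptions for this example, not Synovient's actual technology.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class DataTerms:
    """The owner's terms travel with the data itself (hypothetical schema)."""
    owner: str                          # who the data belongs to
    allowed_purposes: set[str]          # e.g. {"diagnosis"} but not {"risk_scoring"}
    expires_at: datetime | None = None  # terms can be time-bound
    revoked: bool = False               # the owner can withdraw consent later


@dataclass
class SovereignDataObject:
    """A data object that enforces its own permissions before releasing its payload."""
    payload: dict
    terms: DataTerms
    access_log: list = field(default_factory=list)  # every request is traceable

    def request_access(self, requester: str, purpose: str):
        """Enforcement logic: release data only within the owner's declared terms."""
        now = datetime.now(timezone.utc)
        allowed = (
            not self.terms.revoked
            and purpose in self.terms.allowed_purposes
            and (self.terms.expires_at is None or now < self.terms.expires_at)
        )
        self.access_log.append((requester, purpose, allowed))
        return self.payload if allowed else None

    def revoke(self):
        """Revocation mechanism: the owner withdraws consent at any time."""
        self.terms.revoked = True


# Usage: the clinic's request succeeds; the insurer's fails because the owner
# never granted the "risk_scoring" purpose.
weight_reading = SovereignDataObject(
    payload={"weight_kg": 92.4, "scale": "office-3"},
    terms=DataTerms(owner="patient-123", allowed_purposes={"diagnosis"}),
)
print(weight_reading.request_access("clinic", "diagnosis"))      # -> the payload
print(weight_reading.request_access("insurer", "risk_scoring"))  # -> None
```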

Now, the biggest factor in this is actually time: where and how do we insert that intent into the process of acquiring and storing and computing data? Because that intention has to come in beforehand. Just like when you purchase a car, you have a negotiation between you and the dealership, you write a contract, you come up with the terms of payment, and then you receive the vehicle. You need to put the same process into the acquisition of data.

And this is the process that I'm building: building the technologies that make data sovereign, so that human will and intent do not evaporate at the moment they are digitized. So that we can put the processes in place so that you, as the agent over the data, have the opportunity to maintain control over the data, regardless of where it goes and who is capturing, computing, and storing it.

KIMBERLY NEVALA: Does that imply then, though, that all of us as individuals who are going through our day and interacting with digital systems everywhere would need to start making conscious decisions and choices? Today a lot of it is opt-in, opt-out. And I think a lot of people opt in to everything, essential plus non-required, because that is the easiest way to get through the process and do those things. But even there, there are designs that lean into our human psychology. We hit the big green button first; the little one underneath that's in red or black, most people don't hit that for a variety of reasons.

KATI WALCOTT: Yeah.

KIMBERLY NEVALA: But does this then suggest that we as a collective and as individuals will need to rethink and relearn how we engage with systems and what our own accountabilities are at every level?

KATI WALCOTT: I have to say that I know personally I'm inherently lazy and I don't like reading 4,700-page legal documents that describe the terms of service.

I've done several large research projects where I actually went through a very familiar operating system that is on millions and millions of machines and read the terms of service. I even wrote an AI algorithm that essentially said, these are the things that I want to do. Please read through the terms of service. Can I actually accomplish them? And the answer was no. I didn't have representation, and I didn't have the ability to really have agency over what I wanted to do within the terms of service that was being described.

And then I went one further, one step further. I took those terms of service and enacted the steps that I needed to in order for me to regain my privacy and my agency over that operating system. And essentially, the outcome was that I hobbled the machine. I pushed it back to a version from the early 2000s and even then, I still had 15% of my agency and privacy that was being infiltrated.

So what I think we need to recognize is that it is almost impossible for us as human beings to engage with the digital world on these terms. Because it is using a language - versions of code, binary - that we just don't speak natively and don't really have the capacity to understand. And there are a lot of brilliant engineers who code who don't speak binary. We also don't operate at the speed of light, and the terms and conditions and relationships between us and the systems that communicate about us are being changed and reforged sometimes multiple times a month. And we don't have the ability to deal with that.

And then the last thing is that we don't have the ability to understand what's occurring outside of our control. I mentioned that there are three types of data. Inferred data and other data are being collected about us by our refrigerators, by our washing machines, by the Samsung TV we're watching, or by our Alexa devices. These are all points of constant collection and constant connection of data, and you don't have the ability to trace and track where it goes and who has it.

What we need to have is digital representation, and we need to have sovereignty built into that digital representation. So that you have the opportunity to create your digital self within that digital space, where you have control of your digital self, but that digital self represents you and gives you choice and opportunity in digital society and digital space. It speaks the language, understands the interactions, does it at the speed of light, and can trace and track your data through the mechanism I just described.

You need to have that representation. And once we can build that and create trust between you and your representative, that representative then can act on your behalf within the systems that it needs to operate in. So that you have this interaction and you have the trust and you have representatives that can act and work within the construct.

KIMBERLY NEVALA: We're not there today, right? This is a fundamental shift, if I understand it correctly, in how we create what someone might call your digital twin or that digital representation. Because today it is based on some of the information that I might give you explicitly and then a lot of information that's inferred about me.

And as you were talking about walking through your grocery store, I was sort of chuckling to myself because I thought, well, there's one store near me that sees one version of me that's very different than the other. Because I go to one when… I don't cook. I really despise it. But when I actually don't have food in my fridge and I need something quick, that's my default venue. And when I'm actually being good, I'm going to shop, I'm getting the materials to actually put meals together, I go somewhere else. And so their views of me are going to be very, very different. And if they're developing a representation of who I am, my wants, my needs, they will look very, very different, assuming, of course, that they're not, as you said, inferring those.

And so today, so much of that view is inferred. And we mentioned earlier agents versus agency. We're having a lot of conversations now about the shift toward being able to develop digital twins of people and for you to have your own digital assistant. You've said this shift from digital assistants to authorization mechanisms is important, and if it's not managed correctly, could be a portent of worrisome things to come.

So in today's world…because what I understood you talking about is a mechanism by which I can decide what profile and what information is going out into the world within my digital representation, and then how that's being used. In today's world, what does a digital twin really represent when we're talking about individuals? And are we ready, or what are the ramifications of this shift we're seeing right now where almost all organizations, whether it's my bank or my grocery store or my doctor, are trying to really provide me with quote, unquote, an “assistant” or an “agent” that will transact business on my behalf?

KATI WALCOTT: This is a great question because it goes into a fundamental discussion. Or the beginning of this is a fundamental discussion of digital society and whether or not digital society is representing reality. And then what is our representation within that digital society?

So I think it's important to create a mirror between physical society and digital society in order to begin this exploration. Digital society does not represent reality. It creates a construct, a version of it. Every click or photo or post becomes a data point that is stitched together into a digital self that platforms use to decide who you are, what you want, what you deserve, what your needs and requirements are.

The problem is that these synthetic versions are beginning to make decisions in your place, because they are representing data, again, without intent. As we talked about, they decide insurance rates or credit scores, or whether or not you're eligible for a job. And as you can see, there are countries out there with social scores. Those social scores are a construct of what that particular country believes your digital twin is based on your data. And whether it's true or not, again, you don't have the ability to interact with that.

So at this point, we've built a world where this digital reflection is often outranking your human original. And that version, or inversion, of authority is, in my mind, one of the core sovereignty problems and sovereignty issues we need to address in our era.

But within that digital society, you can look at digital twins as a representation of things, including humans. So the traditional digital twin actually comes from - it's a very old concept. I've seen it in NASA and GE as they were beginning to use the power of systems and information technology to construct very complex systems from some component parts.

And the original concept of a digital twin was having a physical O-ring versus a description of an O-ring, in order for you to be able to create an authentic replica of the space shuttle. So that when you're doing stress testing or destructive testing on the space shuttle, you don't have to destroy the original. You can destroy the systematically built, 100% or near-accurate design and understand the behaviors or the results of those tests, so that you know what the outcome would be if that particular event happened in the real world.

So traditional digital twins represent systems and processes: a jet engine or a factory. Your human body is different, because your digital twin includes not just your buying behavior but also your genetics. There's a reason why, when 23andMe was going bankrupt, they were selling the data: that data about the human genome is extremely valuable. But traditional digital twins include something that is observable and measurable.

And when the twin is you, it is no longer technical. I believe it becomes a personal thing. And sometimes it could become political. I don't necessarily want to go into that, but there are some reasons why digital twins, especially in our society today, are becoming such a hot topic. But it embodies your data, your behavior, every inference about you. And it doesn't just mirror what you've done. With AI, it starts to predict what you will do or what your reaction would be.

When you talked about external influence that's constraining you, this is where what you will do, or what you could do, is now being externally influenced by what they think of who you are and how you're going to behave within that constraint. So I believe this is the point where representation without consent becomes control.

And in AI today, these twins are not just static models. They negotiate, they generate, and they decide, when two systems with two representative digital twins think that they are two different sovereign entities. But they are not, because they don't represent you. They represent the system's interests. And those interests are optimized through the data that you have given up. So the result, I believe, is a systematic asymmetry. Your likeness acts in the digital world, but not on your terms.

So this is why I argue for what I call sovereign twins. These are digital twins and digital entities that include both your data and your representation. So a sovereign twin carries your intent as enforceable code. It's aware of your terms, aware of your boundaries. It has the opportunity to negotiate. It understands your rights, and it understands what it needs to do in order to uphold them.

To read those terms and conditions and say, no, I will not hit submit because I don't agree with the following. And, ideally, give you the opportunity to negotiate with that other entity who wrote the terms and conditions and say look, I would like to do what you want, but I would like for you to do what I want and let's have a conversation. So these sovereign twins have the ability to say yes or no, or to revoke the permission, or to manage the data. And it's reintroducing choice into the systems that were designed to remove it.
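As a rough illustration of what that yes, no, or negotiate step could look like, here is a small, hypothetical Python sketch of a sovereign twin checking a provider's terms against its owner's declared boundaries before agreeing. The fields, clauses, and thresholds are invented for the example; they are not an actual protocol or Synovient's design.

```python
from dataclasses import dataclass


@dataclass
class OwnerBoundaries:
    """The owner's declared boundaries the twin is obligated to uphold (illustrative)."""
    share_with_third_parties: bool = False
    allow_inference_for_ads: bool = False
    max_retention_days: int = 30


@dataclass
class ProviderTerms:
    """A provider's terms and conditions, reduced to machine-readable clauses (illustrative)."""
    shares_with_third_parties: bool
    infers_for_ads: bool
    retention_days: int


def negotiate(boundaries: OwnerBoundaries, offer: ProviderTerms):
    """Return 'accept', or 'counter' plus the specific clauses the twin objects to."""
    objections = []
    if offer.shares_with_third_parties and not boundaries.share_with_third_parties:
        objections.append("no third-party sharing")
    if offer.infers_for_ads and not boundaries.allow_inference_for_ads:
        objections.append("no ad-targeting inference")
    if offer.retention_days > boundaries.max_retention_days:
        objections.append(f"retain data no longer than {boundaries.max_retention_days} days")
    return ("accept", []) if not objections else ("counter", objections)


# Usage: instead of silently hitting the big green button, the twin answers
# "counter" and lists exactly which clauses it wants changed.
decision, asks = negotiate(
    OwnerBoundaries(),
    ProviderTerms(shares_with_third_parties=True, infers_for_ads=True, retention_days=365),
)
print(decision, asks)
```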

So the shift from digital twins to sovereign representation, just as kind of a last closing thought here, is about restoring authorship over your digital self and your data. Instead of being mirrored and mined, you're represented and respected. And you have the opportunity to transform the data from something that is acted upon into something that can act with agency. It's a living expression of consent and authorship and control in a world that long, long, long ago forgot to ask for it.

KIMBERLY NEVALA: Yeah. And how fundamental of a shift is that? Not just perhaps technologically - in how we build these systems that do actually harvest and store data and information - but just in terms of how we think about engaging with digital systems and that permissioning process. Because to me, it sounds like a fairly fundamental shift. Is this something that is achievable? And what are the steps that we need to take initially to start us down that path?

KATI WALCOTT: In my mind, this is a fundamental pivot from convenience to governance.

I believe that today's assistants are akin to servants. They fetch, they summarize, they automate. They give you the ability to succeed in a shorter amount of time, potentially at a lower cost. But authorization mechanisms, by contrast, are gatekeepers. They ensure that data use, any computation, and decision making adhere to your declared boundaries, so that you become a participant in digital society versus just an actor.

So this is where I believe AI becomes accountable. Because it doesn't just help you write an email faster. It can actually enforce your will over your digital footprint in real time.

And I think the second part of your question was how fast we can get here. Is this something that's here today? It is not, but it's not that it is actively being thwarted. I believe that when we first began, and Tim Berners-Lee first started thinking about the world wide web and access to knowledge, the intent was right. With the architecture and the fundamental building blocks of how we went about delivering it, we never imagined that we would be having conversations like this in this day and age. We believed we were doing the right thing.

So it is not that we are not able to deliver it, it is that the current systems are not designed to deal with data the way that we have designed it. It requires some fundamental building blocks to change and shift, and it is not impossible. The technology is available to do it.

But it will impact certain companies and businesses who built their core architecture and their core business structure on technologies that generate revenue and profit. Some of those things have to be unseated, and those profit-, revenue-, and income-generating data points and data elements will shift. They have to shift in order for us to get to the type of digital society that I believe we can get to. Something that is a little bit more equitable, a partnership.

KIMBERLY NEVALA: Yeah. And so if we project that forward, if we do this and we're able to make that shift, what does this environment look like? And what are the implications, in your mind from where you sit, in your opinion, what are the implications economically, socially, other - pick your dimension - if we don't do that and we continue down the path we're on today?

KATI WALCOTT: So I had a very interesting conversation with my brother-in-law about something similar to what you're just asking. He asked me, “When you think about human beings, do we work in concert, or do we work individually?” And my sister also has a degree in biology and sciences. She said, “Well, we are human bags of water, and we are individual, and we don't have the ability to think in collective.”

The opportunity for podcasts like this one is to discuss and disseminate knowledge and understanding, so that others can pick it up, bring some of that knowledge and understanding and thinking to themselves, and generate new thought and new thinking. And potentially implement certain key areas of technology themselves, or some key thinking themselves. AI and digital space is nothing like that.

My brother-in-law spent quite a bit of time describing the concert in which systems systematically outthink us, because they have immediate, near real-time access to actual content and inference, and the ability to think in concert through interconnection. So each one of those nodes is not individual. They are not human bags of water. They are an interconnected system that, at every node, has the ability to pull on other nodes of data and create a new frame of thinking. So this changes how we approach that.

So if we can forecast that, there are a lot of people out there who are already forecasting that when AGI comes to be, it's going to put humans out of business. But I like to think in a positive way, and thinking positively forward, I believe we have an opportunity to create an equal and opposite force so that humans and digital things can coexist, and coexist in a way that is going to make our world better together. But we do have to put those controls in place.

And I would say that the AI legislation, some data and privacy legislation, they are starting to make the right legislative decisions. But we need to put teeth in that legislation in order to put the control elements in place. And right now they are not in place.

So I don't like to forecast gloom and doom, but we are at the point, we are at the epoch of the zenith, where we have to make the right decisions. We are coming close to running out of time without the right controls in place.

KIMBERLY NEVALA: So this has been very interesting, and I know we've covered a fair amount of ground. We'll definitely refer folks back to your work, because I think what I find so extraordinary about your work is that you are, in fact, such a positive supporter of the tech, and you see the potential in such a great way, as evidenced, as I said, by the work you've done throughout your career to really provide this and move it forward.

And so I think that us asking those critical questions now to really get to that broader, really healthy, holistic, collaborative, beneficial vision is not at all a… I don't take anything that you're saying as a naysayer, as a skeptic, but as actually a very positive point of discussion.

So with all of that being said, are there any thoughts or questions you'd like to leave with the audience before we close up here?

KATI WALCOTT: I think that the longer we let data operate without enforceable ownership, the deeper the imbalance becomes between humans and digital systems.

And I think we can see some of the signs - and again, you just said this is positive - but I do see that there are some places where we can see the early signs of feudal economics, where compute lords own the land and the data producers are the digital peasants.

I think we need to break the cycle, and we need to require the enforceable rights framework at the data level, not just at the regulation on top. Because we need to make sure that AI does not become a technological monopoly, but an opportunity for economic sovereignty and an equitable distribution of opportunity and wealth that comes from this incredible technology. And I believe that this economic sovereignty begins with data sovereignty.

KIMBERLY NEVALA: Yeah, I think wise words.

And to be clear, I don't necessarily think that - in fact, I would say that I don't think - all of the trajectories we are on currently are positive. But the fact that folks like yourself are raising their voices to say there is a lot of opportunity and there are some dangers - here be dragons - that we need to address now, is very positive. And I hold out hope that we actually have the time, and hopefully we find the will, to make some of those changes. So that we can all benefit from what is a very powerful and interesting technology. So thank you so much. I really appreciate all of your work and the insights you've shared today.

KATI WALCOTT: Thank you so much, Kimberly. It was great to have the opportunity to discuss this exciting area with you.

KIMBERLY NEVALA: Yes. Excellent. And to continue learning from thinkers, doers, and advocates such as Kati, subscribe to Pondering AI now. You'll find us wherever you listen to your podcasts and also on YouTube.