Pondering AI

Theodora Lau banks on AI becoming our financial GPS and OS but flags required waypoints to protect consumer data rights, maintain trust and close the digital divide.

Theo and Kimberly discuss the progression toward a financial GPS powered by AI; consumer data rights and trust; the billion dollar question for 2026; analog identity verification; reducing risk and improving the customer experience; valuing people above transactions; the widening digital divide; upskilling and reskilling; cultivating curiosity and reclaiming time; financial security as the foundation for health; agentic commerce and AI as the financial OS; and always being human.

Theodora Lau is the Founder of Unconventional Ventures. A prolific speaker, author and advisor, Theo is one of American Banker’s Top 20 Influential Women in FinTech. Recognizing that health and financial security are innately entwined, Theo works to spark innovation in the public and private sectors to meet the needs of underrepresented consumers.
 
Related Resources:
A transcript of this episode is here.

Creators and Guests

Host
Kimberly Nevala
Strategic advisor at SAS
Guest
Theodora Lau
Founder, Unconventional Ventures

What is Pondering AI?

How is the use of artificial intelligence (AI) shaping our human experience?

Kimberly Nevala ponders the reality of AI with a diverse group of innovators, advocates and data scientists. Ethics and uncertainty. Automation and art. Work, politics and culture. In real life and online. Contemplate AI’s impact, for better and worse.

All presentations represent the opinions of the presenter and do not represent the position or the opinion of SAS.

KIMBERLY NEVALA: Welcome to Pondering AI. I'm your host, Kimberly Nevala. In this episode, it is a pleasure to bring you Theodora Lau. Theo is a luminary in the FinTech space and founder of Unconventional Ventures. Welcome to the show, Theo.

THEODORA LAU: Thank you so much for having me, Kimberly, and thank you for the intro. It is nice to be reunited.

KIMBERLY NEVALA: Absolutely. I have to tell you, I was also delighted to learn recently that you are a fellow chemical engineer by training. There are a surprising number of us roaming about the place these days.

THEODORA LAU: Really?

KIMBERLY NEVALA: Yeah, there really are.

THEODORA LAU: It makes my day. I don't normally run into one. And when I tell people I went to engineering school, my background was chemical engineering, they look at me like what are you doing here?

KIMBERLY NEVALA: Yeah. Well, I don't know. My experience back in the day was about 1/3 of us went into engineering for real, about 1/3 of us went into finance, and about 1/3 of us went into consulting or tech. So somehow it did seem to make sense that here we are.

THEODORA LAU: And it works out.

KIMBERLY NEVALA: It works out. Now, for folks who are not familiar with you and your work, can you tell us a little bit about what inspired you to found Unconventional Ventures? And also, what might we learn about your interests or the arc of your career based on your trajectory as an author: back from Beyond Good, which I think was in 2021, to Banking on AI, which was published last year?

THEODORA LAU: Whoa, that is a long question. So I did start out as engineering and the majority of my career, I would say first half of it, was telecom and IT. So I spent a lot of time with various carriers and providers, building systems, tearing systems down, back from the landline days - so I'm absolutely dating myself - all the way to wireless and high-speed wireless.

Took a little break, and as luck would have it, I ended up in a non-profit, looking at how we can use technology to help people live healthier lives as we get older. Now, it took me a few months, and then I realized, wait a minute, you really can't tell people to live healthy, and eat healthy, and do all these healthy things if they have to worry about money. Financial security is the basis of a lot of the things that we do; it's the foundation we have to build upon.

And so I switched my focus to working with founders and FIs and innovators, VCs, and what have you, to look at how can we create solutions to create a better and more secure future for everyone as we get older. That was about 10 years ago, and that's how my journey into fintech and financial services started.

I would be the first one to tell you I am not a writer. I am not good with writing. I never really took writing classes. As a matter of fact, I was born and raised in Hong Kong. And in the education system there, from seventh grade onwards, you decide whether or not you want a STEM path or a non-STEM path. I chose the STEM path, which is physics, biology, and mathematics, pure math, everything except writing. There was no writing. There was nothing of that nature. Same with my engineering school.

So I actually don't write in a typical sense. I write in sticky notes. I would have sentences on sticky notes, and I put the sticky notes together kind of how you draw a block diagram. My thinking is a process flow diagram. If you see the boards where I have my concepts, you'll see boxes and arrows. It's like a flowchart, and that's how all my chapters have been conceived. Whenever I have an idea, I just put it on a sticky note and stick it to the wall. And that's how my first, second, and third books came about.

KIMBERLY NEVALA: I love that you are engineering your book workflows as an author. So there's hope for all of us out there who came up on the technical side of the house. And for anyone who hasn't yet read you, you are actually a very good author. You're very authentic, and you get to the point, just very directly and in snippets that will stick with you, which is fantastic.

Now you are, and have been, really spending a lot of time looking at financial services and fintech, in particular. And you write frequently about the rising, sometimes unrealistic, expectations for AI to, quote, "do everything, everywhere, and all at once." It would be unrealistic for us to ask you about all the places that AI is being applied within this space. But as an industry insider, are there AI applications or categories of applications that you find most promising or exciting?

THEODORA LAU: I think there is a dream that I have had since about 10 years ago, when I started at the NGO. And it's a dream not yet realized. But it's way closer than we've ever been.

You know how our lives are pretty much managed by this little guy right here. We walk around with it as a minicomputer that we carry in our pocket. It knows everything about us. I'd be OK losing a wallet. I would be very lost if I lost my phone.

So if you think about how much we interact with this device and how much it quote unquote "knows" us: knows our habits, knows our circle, knows where we spend money, knows where we save money, knows where we buy things, and knows what different life stages we might be in. It would be amazing if we could collate all of that data - the behavioral insights, and where we are in relation to the people around us, our financial obligations, past, present, and future - and bring all of that together to help us create a GPS for our financial lives. Pretty much like how you get into a car.

Now, we don't even think twice. We definitely do not bring those AAA maps anymore. We don't print out MapQuest. That was years ago. But now, we just put in our destination, and we trust whatever application that we use to take us from point A to point B. We trust that it will reroute us and give us suggestions for different routes, if we need to go from here to there, should we want to stop at a different destination or a midpoint.

If we were to think about financial services in that way: I know where I am right now. I know my obligations. I know in just about two years' time my son will be in college. I know another two years after that my daughter will be in college. And I know where my parents are now - my dad just turned 80 last year. So somewhere, sometime, I will need to start picking up more financial caregiving capabilities and responsibilities. I am going to be 53 this year. So you can add all of these data points. I know eventually I want to retire. But how do I go from now to where I need to be, with all the steps along the way, with all the things I need to do?

And that's where I have a lot of hope. When we think about the data around us - not just financial data, but also health data, personal data - and when I think about the ability for algorithms to run through all the different scenarios for us, it could guide us to where we need to go.

KIMBERLY NEVALA: Now, this is really interesting. I think I personally have a bit of an angst about that view of the future. I was at Sibos, I believe you were too, last October. Was it October? And everybody there, every provider there, whether they were a traditional bank or a neobank or a payment processor, were expressing this almost ubiquitous ambition to become a lifestyle partner. To be the trusted brand – or, these days what do we say, copilot for folks' lives.

And frankly, for me, I'm not sure that's what I want from a bank. I want to be able to transact business quickly, effectively, efficiently and not necessarily be reliant on a single provider. Especially when we widen that view and we look at every retailer out there who wants to do the same thing. Amazon wants to do the same thing. Apple wants to do the same thing. Across all of these areas, everybody wants to do that.

So what is your perspective on that aspiration and how is that changing how maybe financial services companies view consumers? And what are some of the implications for gaining and retaining their trust then?

THEODORA LAU: I think that's an absolutely valid concern, and I share a lot of those.

I am very suspicious of any entity who wants my data. The first question I'll be like, why do you want it? And where are you keeping it? What are you doing with it? And as consumers, especially in the US, unfortunately, we don't really have a lot of good answers to those questions. We don't have a lot of the same protections that some other jurisdictions might offer to their citizens.

I think where I would love to see this future would be some sort of an independent data store, for lack of a better word. A data sphere where all of my data is my data. And I have the right to assign different permissions to access that data. Not really me sending you the data, but you coming to access a certain portion of it. I control the rights, and I control what you can do and for how long. And then when you're done, you're done. So I would love to get to a state where we can do that. We're not there yet.

But to come back to some of the more practical examples - health data, for example. How many times do we go into different health providers and have to provide the exact same information over and over and over? I don't even know how many copies of my health data are out there. They all ask for the same thing. And then they store it; you don't know exactly where. Why can't I have control of my health data? And if I need to come in for my regular checkup, I say, OK, here are the pieces of information that you need. This is the update. You come to me, and this is what I'll give you permission to access. I would love to see that.

And if we can get to that state where I have control of this layer then I would be OK with, well, here I have my own little personal agent. And it can do all these things for me instead of me trusting a particular entity, be it a big tech, be it financial services company, be it a fintech, what have you, to do that for me.
I think from that perspective, it feels safer. It feels more like we have control. Because right now, I think a lot of us feel like we are the product. That's what people often say: we can use email for free because they have access to our data and they sell ads, et cetera. Why can't we change that model?

KIMBERLY NEVALA: Yeah, it's a great question. Why can't we?

And certainly, you see AI - I'm using AI broadly, including but not limited to generative AI - as a key facilitator of that type of model. It is interesting, however, and I'm going to come off in this episode as, I guess, the financial services skeptic. But we definitely see areas, like security, where this is both a boon and a threat. Generative AI, in particular.

And I've had some interesting experiences of late, where I had to call the bank for something. If I have to call you, I'm probably already not very happy. And I was asked for a voiceprint for security. And I thought, that just doesn't seem like the most secure approach here. Or I was trying to execute a money transfer for personal reasons, and I had to call the bank and was basically interrogated. I mean, they were asking why I was making this transfer - to which I responded, it's not your business - right down to, can you tell me which computers you usually log onto your online account on? I was horrified. It was not a particularly pleasant conversation; probably for me or the poor customer service agent who was treated to me that day.

So can you talk a little bit then about this idea? In this world today, where we don't have control of the information, what can financial services companies do to make sure that they are gaining and retaining our trust? And then as part B, and we can come back to this, how is AI today impacting their security posture? How do we make sure that we're developing services that balance this tension - so they are protective without feeling punitive to the consumer?

THEODORA LAU: I think that is the one-billion-dollar question for 2026, Kimberly.

I think they're all trying to figure it out, to be honest. I was chatting with someone right before the holidays, and he mentioned that their institution actually got to the point where they ask people to come into the branch to verify their identity if they lose their credentials and have trouble resetting them. Because they said, we just don't know exactly what we can trust and what we can't trust anymore. So that's the flip side of the trust equation. That horrified me. I'm like, oh, my god.

But yes, I think I understand. Because there was another FI that I was dealing with - it was my business account - and it took a whole year just to update my phone number. Because when I opened the account, I had to apply on paper. They printed out a form for me, and I wrote down the information. And they took it and said, we will open your account in the next two weeks because we need to send this to someone to type in the information. And of course, when you have so many manual, analog procedures - where I'm like, why can't I just apply online - mistakes happen, and then they have to find ways to basically re-authenticate me. So anyway, that's a side topic.

I think with regard to the trust question, they need to show the customers that they can be trusted. I think that is the number one thing. That means having the proper security protocols in place, so it doesn't look like you are selling their data or that you don't care about the security of their accounts.
Having the fraud procedures in place, I think that's really important, especially with the amount of fraud that's been going on.

Because fraudsters go where the money goes. Of course, they'll go to the bank. But now they have access to these tools too, and what they can create is so much easier, so much faster, and so much more believable. Because now, we can't really look for, did you have a typo in here? Did you say you're a Nigerian prince? Now it just looks so real. And how do consumers then figure out whether this is real or not?

So FIs should actively come out and demonstrate to customers that they can be trusted. Because they have all of these procedures and checks in place. Because they educate you. Because they have your best interest at heart; because they actually look at you as someone banking with them and not just as a number they can exploit. I think all of those actions can help build out the trust.

Now, you and I both know that building up trust is not easy. Losing it is very, very easy. So if there are fraud events that happen, be there and help the consumers out. That also goes a long way in establishing trust.

And that's actually one of the reasons why, back in the day when I first started, I was looking at how a lot of FIs were trying to chase after millennials. And I remember distinctly, I was on stage and I was talking about, why don't we use the technology that we have, why don't we use some of the newer solutions that we have, to help caregivers who are taking care of their family while at the same time taking care of their children? Because oftentimes caregivers are women in their 40s, with their own family members to take care of, with their parents that they have to take care of. They're being stretched so thin.

If you're trying to go after a demographic, if you're trying to gain trust, the best way to do that is to show that you care and show that you have a way to help them out. Why not this? And I remember - this was 10 years ago now - someone literally looked me in the eye, and he said, we don't do old people. Now, how far does that go in establishing trust, AI or not? Why would I want to come to you if this is how you think of me? You just want the business of my youngsters. You don't care about me, and you don't care about my parents, who I care about. That's a no go.

So I think there is a lot more that we can do. We have the data points. A good friend of mine once told me: think about the process of applying for a mortgage, for example, for credit. Your first interaction is at the point when you have all these piles of paperwork - we're talking hundreds of pages - where they check everything: your income, the past five addresses you've had, et cetera, et cetera.
They know all these things about you. They use it for that one milestone. Which is the transaction. Which is extending you credit.

When is the next time you hear from them? When your first payment is due. When is the next time you hear from them? When your second payment is due. That's it. They are only there for you for your first transaction and when they need to get your money.

Now, smart providers will think about - wait a minute. Now, I have all of this information about this particular family that just purchased a home. What else can I do for them? Not now. Maybe a few years down the road. Would you be looking to replace the windows? We know this is very expensive. How can we help? Would you be looking to upgrade or renovate part of your house 10 years down the road? Or you look like you might be an empty nester soon, et cetera, et cetera.

There are so many moments - not just in a major milestone, but also in between moments - that we could provide services for the people that we serve, if only we pay attention.

KIMBERLY NEVALA: And would you see that happening in an opt-in way versus a pull versus a push? Or should people have the option to make that decision?

THEODORA LAU: I think people should have that option, otherwise, it would be a little creepy.

KIMBERLY NEVALA: Yeah, it could be. It could definitely be a little bit creepy. So knowing that we have to start where we're at, are there examples - call them use cases, call them applications - where you've seen recently, even within the ecosystem that we have where folks are really applying or have used AI well to really lean into and improve that consumer experience?

THEODORA LAU: We're not going to talk about the chatbots, because I think that's been talked about often - too much. One of the more recent ones that I love, a use case I quoted in the book, is actually from a German service provider.

And what's really interesting about them is they target a particular segment. They work mostly with medical students in universities. Here in the US, for example, typically the first reaction would be, oh yeah, you give them student loans and then be done with it. But what they do is, they actually look at the entire community, for lack of a better word, of medical professionals. Starting from when they're in uni, helping them create a community with the people who are already practicing.

And when they graduate, when you already have that connection and network in place, then it's easier for them to find practices that they can join. As you grow along in your career path, they can start providing you with wealth management services. And when you get to a certain stage where you want to retire, then it goes back full circle. They can connect you with the younger professionals who just graduated, who are looking for practices to buy.

So it comes full circle, if you will, across the professional journey. And I thought that was amazing. Because you already have the data, not just from the initial point where you're extending the loan, but all the way through. So why not use that data to provide more services? So that was an interesting one from Europe.

Another one that I like is actually in the US, around fraud - being able to do multi-modal authentication. So it's not like you call and first establish that your voice is your password. We've seen that quite a bit the last few years. This is more than that. This is not requiring you to say a very weird phrase. But in conversation, it can detect a lot of the data points - kind of like the use case you first talked about, but in a slightly less creepy way. And then it can present a risk score to the agent, so you can tell whether this could potentially be a deepfake or an actual customer calling in.

And what the credit union has reported is that it has not only reduced the amount of time it takes to authenticate the call but also improved customer satisfaction. Because it feels more natural and is woven into the flow.

So those are really good use cases. Because we know, again, fraud is a huge problem. So how can we use technology to not just reduce the risk, but also improve the customer experience? I think that's important.

KIMBERLY NEVALA: And I know a real area of passion for you is expanding access. Really enabling folks to have, we'll call it fiscal sovereignty or fiscal agency, and this view of success beyond the tech.

So as you're talking there, again it does presuppose a world where you are capturing a lot of information about the consumer, a lot of very personal information, biometric information. But it also presupposes that folks are digitally enabled.

So can you talk a little bit about - and again maybe just categorically or on balance - if our current approaches to AI, to digital transformation in financial services, are widening or collapsing the digital divide?

THEODORA LAU: I would love to be able to say it is closing the divide but from everything we've seen so far, I think it's increasing the divide in many different aspects. Not just in the usage of it but also in who can have access to what and how.

So first you look at the models that are out there. The majority of them are trained on Western languages. The world has quite a few billion people. Not everyone's first language is English. And so that already excludes from the data set a lot of demographics of people whose first language is not English, or whose language and culture are not documented on the internet. Because a lot of the models are trained on data that's available on the internet, which leaves out demographics whose culture is more oral.

Again, they're not a part of those big, universal models, if you will, for lack of better words. There are over 7,000 languages actively being spoken today. How many of these are actually represented in the models? I would say no more than 100, if that many. So that's already creating a divide, if you will, in who can benefit from the technology that we have.

And then you talk about the rest of the digital access. Let's talk about two things, right? First, the people who can develop these tools. Look at a lot of the work that's been done in the West - I don't like to say in the West - but in the US, for example, or in Europe. A lot of these large language models take a lot of energy and a lot of resources to deploy. And a lot of times, the data centers are in places that are already water stressed. So we know about the electricity implications, we know about the impact on the environment, et cetera, et cetera. Until the open source models, the smaller models, came around - what, in the last year? - that has been the case.

So now, when I see more of these smaller language models, perhaps it could provide a different pathway for the developing countries and communities to have access to the tools in a slightly less resource intensive way. So that's a good thing. We know how much the GPUs and all of that cost. So the tech stack alone, that's expensive.

And now you look at, for example, within financial services for the last two years, all the studies we've seen on the ones that have been able to bubble to the top. Be it Evident AI’s research reports, be it, I think, CB Insights, which has another one. All of these. If you look at the organizations that have been able to take advantage of the technology, they're all the big FIs.
They have the resources to experiment, they have the resources to get access to what they need to develop the solutions, et cetera, et cetera, et cetera.

I would love for this to change, and I think it will slowly change. It will require - I don't want to say courage, but perhaps courage is a good word - courage to try to identify what they can do with the technology. A lot of the perception is, oh, my god, this is going to be such an expensive and long undertaking. We don't have the resources. That's predominantly what I always hear. Can we flip that script? If we can, that would be wonderful.

And then talk about the people who can use the technology and the applications and the tools. Again, you need the education. You need access to the tools. We need to find ways to give people more - access is an overused word - but exposure to what the technology is and what they can do with it. I think that is a big, big task. Education, re-education, upskilling, reskilling. It's not just about, oh, here's this big, wonderful, magical ball. If you're brave enough to use it, great. Stay. If not, you're out. That is not really a good policy.

How are we going to help people adjust and adapt to a new world? Until we solve all of that, I am worried that the gap is getting bigger and bigger.

KIMBERLY NEVALA: Yeah. And certainly, the gravity right now is around large language models, foundational models. But how have you seen - whether, again, it's the more traditional what we think of as banks or other types of financial services providers, maybe the neobanks - leveraging more traditional techniques? Because there's actually quite a bit that can be done. A lot of the engine rooms in these organizations are still run with more traditional techniques.

So is part of the solution here keeping a broader eye on not everything has to be AI? And the things that can be AI don't necessarily have to be this one element that does have this gravity and high barrier to entry?

THEODORA LAU: Absolutely. I think the board needs to understand that too. Because I feel like a lot of times executives are running around because the board is asking them to do x. They might not understand the more holistic picture.

And as you say, the technology actually has been used for quite a while in the backend. We've been running it in a lot of the backend processes for a long time. So the whole excitement in the last few years is predominantly because of OpenAI and what that lot has been able to do, and the hype, for lack of a better word, that it has created.

So I think if we can just take a pause and stop throwing things at the wall, and look at what exactly needs to be done. What are the jobs to be done, and how can we solve them? Maybe it doesn't always have to be AI, or maybe it doesn't always have to be this particular type of technology. Maybe it could be something else.

I do fear though, our industry very much loves to run after the shiny, new toy. It's like, oh, what do we need to do with quantum? What do we need to do with blockchain? What do we need to do with stablecoins, et cetera, et cetera.

KIMBERLY NEVALA: So in that environment, and acknowledging just the nature of the beast, if you will, of organizations and corporations: practically speaking, are there some attributes or attitudes that matter most for leaders? Then I'll follow that up by asking about us as consumers.

THEODORA LAU: Curiosity. Curiosity. I always tell people - and it's not just for leaders, it's for everyone else - be curious. Like kids. It always amazes me just watching them grow, because mine are teenagers now. So they have their own opinions and are very much set in their own opinions.

And I often remind them - remember when you guys were just a few years old? Their favorite word was always why. Why is the sun going down? Why is it always in the east? Why is it not the west? Why? Every day, there were at least 10, 15, 20 different whys. And then the kids discovered Siri and Alexa. And so they're like, all right, Mommy, we're not going to ask you anymore. We're just going to keep asking Alexa. And now, they're like, well, we don't even ask… because. And now, the tables have turned. I'm like, why are you doing this? They say, well, just because.

So if we allow ourselves to perhaps pause and go back to how curious we were when we were kids and just ask a question. It doesn't always have to make others uncomfortable; just ask as a way to learn. Somehow, I think we've lost that.

KIMBERLY NEVALA: Yeah, I tend to agree. I tend to agree. And I don't know why that is. I don't know if it's because we're impatient. Or because there's just this push for conformity and acceptance or et cetera, et cetera. I'm sure it’s a whole lot of things that come into play. Not wanting to show ourselves as non-expert. Wanting to show up as a knowledgeable person in the world. So on and so forth.

Certainly not a new phenomenon, but perhaps an acute one in the space of AI. I think it's because everybody is now so aware of it and so gung-ho in a lot of cases. And because there was a lot of hype. And I think it can feel more fraught for folks to raise a hand and ask the question. No one wants to be seen as someone who's resisting or anti-AI or any of these sorts of bits. Maybe the language there is difficult. Yeah.

THEODORA LAU: But I think also we are running too fast. We are always in a hurry.

We like to talk about how modern-day digital tools can help save us time, so we can reclaim time. But in reality, we don't reclaim it. We just make ourselves even more busy. We fill our agendas with more things.
Maybe we don't have to, right? Maybe we should give ourselves the opportunity to actually use the time that we've gained.
We brag about these tools increasing efficiency - people can save three hours in a week, et cetera. Use those three hours to take a break, to pause. We don't have to add three hours of work because we saved three hours. And that's not a radical idea.

KIMBERLY NEVALA: It shouldn't be. It shouldn't be a radical idea.

But I wonder if there's a corollary here when we think about, in the context of even something like financial services. Where you were talking early on about wouldn't it be great if you could, for a medical student, help them start to develop some foresight about what they're going to need in the next phase of their life and how this is going to move right down the path. Or I have this conversation with my own nieces and nephews about just starting to put a little bit of money away now. Even though I always tell them I saved more money when I was making less money. And there's a really important lesson in there somewhere.

But thinking about these things early and often. Where with all of the rich set of information we have, could we also, as financial services providers, be looking not necessarily to force a decision or push an action at somebody but just to provide them guidance. And actually saying, hey, here's some information for you to consider very openly and in that way develop that sort of relationship. So it's a guidance and not action, I suppose.

THEODORA LAU: Yeah, guidance or perhaps, not shaming.

We like to do a lot of, well, these younger people, they don't know how to save anymore. Or worse, people your age, they have done X already. But why? On one hand, we tell everyone we are all different. And then on the other hand, you're like, well, but wait, people in your group, living in your zip code, they have done all these things, so you are behind. That's not helpful.

I think the other part too is, someone told me this, that it's really hard to think about what you might need in 10, 15 years when you can't even figure out what you're going to do next month. So giving that base of financial security, addressing really what's on top of their mind at the dinner table, that's also important.
You can't tell someone, hey, you really need to start saving for retirement when they have $200,000 in student loans. How do I do that? I can't even pay my rent right now, and my wages might be garnished. And now, you're telling me I need to start saving? We can't do that.

So I think things are way more connected and interconnected than at the surface. And so if you just try to solve for one or push guidance on one and neglect the others, we're not going to go anywhere. It's just going to keep going in circles.

KIMBERLY NEVALA: Yeah. And I suppose that comes back to, when I think about that, I think about it more as: here are some resources for you to consider. And you can pick. It's sort of like the Choose Your Own Adventure books. I would love to see providers saying, here's the Choose Your Own Adventure book of things you can consider, and you tell us what the right path is. As opposed to them telling me what the right path is at that moment. This from two folks who started out potentially thinking we were going to be chemical engineers. So there you go.

THEODORA LAU: Let's mix things in the lab and see what happens.

KIMBERLY NEVALA: Yeah, let's see what happens.

Now, also in your book - and I think you may have referred to this earlier a bit when you were talking about the financial GPS - you predict a future in which AI is the financial OS. And that seems to be a very timely and appropriate - I don't know if I should call it a prediction - one that is certainly gaining traction, particularly under the guise of agentic AI.

So I would like to talk a little bit about what the impact or implications there might be. But starting first, can you define for us what you mean or what it looks like when you say AI is the financial OS?

THEODORA LAU: That concept came - it was actually constantly evolving, especially last year as the more I talked to people and as I saw the pace of new tech being developed and dropped. So it came from the phone.

When I think about the iPhone, this was back in 2007, when the first iPhone was launched. And I remember I was this nerd who was so giddy with excitement. I was working for a different wireless carrier at that time. So I actually lined up outside the store, and I waited for the first phone. And I was so happy and so excited. But I couldn't show it at work, because I had actually signed on with a competing wireless carrier while working for my own company. So I had two phones, and I kept playing with it.

And I remember I took it to Tel Aviv a week later for a business trip, and I was showing people, oh my god, you have this thing, and you can access the internet, and you have this interface that is so cool. You don't have the keyboard anymore. And here is a weather app. I don't know why I was constantly checking the weather, but I thought it was really cool just because you could. And I came back with a data roaming bill of over $300-some, just from a week of nonstop checking the weather app. And at that time, I was like, wow.

Now, 2007. If we go back in time and think about that moment: you had Nokia. I loved Nokia, the bar phone that we had. And people were looking at this iPhone like, well, no one needs to carry a minicomputer around. There's no use case, and so on and so on, because it needs all of these things. So everyone came up with all kinds of reasons why it wouldn't take off. Almost 20 years later, it has become indispensable.

Now, we're carrying data around with us. Now, we're transferring money in the middle of a restaurant. Now, we're getting a car ride in the middle of Manhattan on a rainy Friday afternoon. Our lives have been turned upside down. Our lifestyle became very mobile.

A lot of these developments happened, not because of the iPhone, but because of the iOS. Because of the ecosystem that it created, it brought together all the developers. It brought together new ecosystems of players. It brought together new industries and new ways of thinking about how we can use data.

So if I were to take that concept and fast forward to here, to the moment we are in right now. Where when we wake up, we don't just check email; oftentimes, people say they wake up and chat with their generative AI chatbot. Now, I'm not going to pass judgment on whether or not that is a good thing. But the little bot that people have been using and growing attached to is becoming an indispensable part of a lot of people's lives.

To the point where there are now more people using Gen AI chatbots to search for banking services than there were a year ago. People would talk to the bot - there are studies being done on Perplexity, ChatGPT, and I forgot which one, I think the third one might be Grok or Gemini. But they were looking at the patterns of what people are using these bots for. They're asking for advice on, how do I lower my debt? What is the best way to find lower-cost services, xyz? How do I get out of debt? Et cetera, et cetera.
So not just the silly questions, you know, that we ask. Not just the work type of questions that we ask. But actually, personal finance questions that you would think people would have gone to their bank for. And now they're asking the bot.

And you see the development of agentic commerce, where you initiate the search and the product discovery in one of those bots. And guess what? Not only can you discover the product, now you can go all the way down to checkout and pay without leaving that interface. So everything else becomes abstracted. The UI that you have built for your bank could one day become abstracted away, when people just go into the bot looking for things. You might not even show up in their search. You might be invisible. You might not be relevant.

And so all of these developments in the last six months raise the question of how we are going to live our daily lives. And increasingly, what people call AI - again, in a very broad sense - is going to be the center of the machine that drives our day-to-day lives. And that's where "AI is our new OS" comes from.

KIMBERLY NEVALA: And you can see some obvious benefits to that in terms of having an intuitive way of doing things; it's easier to speak or write in natural language. But there's a clear downside: in that disintermediation between you and the providers, your access to services, and in fact your exposure to products and services at all, is now dictated by a few intermediaries who will transact with another select set of providers.

THEODORA LAU: Absolutely.

KIMBERLY NEVALA: What are the implications, then if we continue down this path? And what are things that financial service institutions need to be thinking about and consumers as well?

THEODORA LAU: I think the threat that you were talking about is very real. Because these guys, the Googles of the world, the OpenAIs of the world, they're not developing these protocols and all these things out of the goodness of their hearts. They know this is where the market is going, and they will, one day, profit from it.

Could they strike exclusive agreements with certain big retailers that say, hey, with this, we're going to send preferential traffic to you. We're going to surface more of your products instead of the mom-and-pop down the street. Why not? It's their algorithms; they can control that. And I think that threat is very, very real. And there are no guardrails in place, there are no regulations in place, and it's very much a black box at the moment. So that would be my one biggest worry.

From a bank's perspective, there is a company called Avenue Z, and two months ago, I believe, they came out with a report on the visibility of digital banks in the AI world. What they did was look at searches where consumers used generative AI bots and see which brands show up. Over 83% of the brands that show up are the top 10 that they listed. That's it.

So if you're not one of those 10, then you might have a problem. Because people won't even come to find you. They won't even know you. And I think that will be one of the big problems that financial institutions have as they go more digital. People are talking about closing branches. How do you stay visible to consumers when they are not coming to you for service, when they're relying on a layer that is abstracted from you? How will you show up where they are? I think that is a huge question for FIs, and if they're not thinking about it, they really need to be.

KIMBERLY NEVALA: So as you look out over this very exciting and, in many ways, turbulent landscape, what are you going to be keeping an especially sharp eye on in the coming - I was going to say months, but really, pick your horizon: weeks, months, years?

THEODORA LAU: One week at a time. I think-- oh, god, a lot of things.

I think first and foremost, I do worry a lot about the impact on people and on consumers. That's always a big part of what I read about, both personally and professionally. Because at the end of the day, we are serving people. We're not serving bots. At the end of the day, you and I are talking. I'm not talking to your avatar. These are real people, and these are real lives. So how will the evolution of technology impact who gets employed and who doesn't? That would be one big thing: the impact on labor.

The impact on who gets a voice and who gets visibility in this new world, that will be the second big question. We talk a lot about technology bridging gaps. And as you mentioned, that might not always be the case. We talk a lot about everyone needing to have a voice. Well, based on everything that has been developing, both from a technology and from a social perspective, I don't see that being what we will get. So how can we work together to reverse that, to make sure people actually do have representation in what we're doing? I think that's a huge one.

Risk is another big one. As much as we are evolving the technology and using it for different use cases in financial services, bad actors are also using and evolving the technology to scam people. So how can we evolve the technology, and how can we evolve the risk framework, to counter what bad people can use the technology for? How do we protect consumers? That is the third big one. And that's a lot. That's a lot to think about.

And if I might add, I think the last one I would be a little bit worried about is how the different jurisdictions work together. Because look at where the US is heading, versus where the EU is heading, versus, for example, China. China came out with a huge new set of proposed regulations around AI, on what they can or cannot use. You have the EU AI Act. You have Australia. You have a lot of different entities, and then you have the state-level regulations that are coming into play. How can providers navigate all of that?

I mean, the ecosystem is getting more and more fragmented, when we're supposed to be coming together to use the technology and solve problems for good. And I feel like there's a lot of hurdles that we need to go through, that we need to break down before we can get there.

KIMBERLY NEVALA: Yeah. And so what final advice or words of wisdom would you leave with folks so that we remain heartened by and pursuing the great potential and don't just get downtrodden at all of the roadblocks ahead?

THEODORA LAU: Oh. My kids would tell me, touch grass.

KIMBERLY NEVALA: I love that. I love that your kids are saying touch grass.

THEODORA LAU: Right. That's what they're saying. They're like, Mommy, touch grass. I would say disconnect. Disconnect once in a while but disconnect and talk to people. Talk to real people. Reach out to people and just be human.

I don't even know if that's the right advice, but be human. Because it's far too easy to just stay in our own bubble, work through a problem, and come up with 500 reasons why we can't do something. Perhaps, if we find ourselves in that state, just disconnect. Disconnect and go talk to people.
Go grab a cup of coffee. Go out and do something that is anti-digital. Not in a bad way, but go back to the analog days, if you will. Read a book. Make a cup of coffee. Something. Just be human.

KIMBERLY NEVALA: And if folks are looking for a good book to read, we have some suggestions for you, and we will put those in the Show Notes. [LAUGHING]

So on that note, I will thank you for your insights and we will leave that advice and those key questions ringing in folks' ears. Thank you so much for joining us today and sharing all the things. I know we covered a lot of ground today, and I really appreciate it.

THEODORA LAU: Thank you, Kimberly.

KIMBERLY NEVALA: Right. So to continue learning from thinkers, doers and advocates like Theo, subscribe to Pondering AI now. You can find us wherever you listen to podcasts and also, on YouTube.