Pondering AI

Miriam Vogel disputes the notion that AI is lawless, endorses good AI hygiene, reviews regulatory progress and pitfalls, champions AI literacy and diversity, and remains net positive on AI.

Miriam Vogel recounts her unforeseen path from in-house counsel to public policy innovator. Miriam acknowledges that AI systems raise some novel questions but reiterates that there is much to learn from existing policies and laws. Drawing analogies to flying and driving, Miriam illustrates the need for both standardized and context-specific guidance.

Miriam and Kimberly then discuss what constitutes good AI hygiene, what meaningful transparency looks like, and why a multi-disciplinary mindset matters. While reiterating the business value of beneficial AI, Miriam notes businesses are now on notice regarding their AI liability. She is clear-sighted regarding the complexity, but views regulation done right as a means to spur innovation and trust. In that vein, Miriam outlines the progress to date and the work still to come to enact federal AI policies and raise our collective AI literacy. Lastly, Miriam raises questions everyone should ask to ensure we each benefit from the opportunities AI presents.

Miriam Vogel is the President and CEO of EqualAI, a non-profit movement committed to reducing bias and responsibly governing AI. Miriam also chairs the US National AI Advisory Committee (NAIAC).

A transcript of this episode is here

Creators & Guests

Host
Kimberly Nevala
Strategic advisor at SAS
Guest
Miriam Vogel
President & CEO, EqualAI

What is Pondering AI?

How is the use of artificial intelligence (AI) shaping our human experience?

Kimberly Nevala ponders the reality of AI with a diverse group of innovators, advocates and data scientists. Ethics and uncertainty. Automation and art. Work, politics and culture. In real life and online. Contemplate AI’s impact, for better and worse.

All presentations represent the opinions of the presenter and do not represent the position or the opinion of SAS.

KIMBERLY NEVALA: Welcome to Pondering AI and one of our inaugural YouTube episodes. I'm your host, Kimberly Nevala.

Today, I'm beyond thrilled to be joined by Miriam Vogel. Miriam is the president and CEO of EqualAI and the chairperson of the National AI Advisory Committee - which I usually call NAIAC and I'm not sure if that's the appropriate term…

MIRIAM VOGEL: That's the affectionate term. It's appropriate. Yes.

KIMBERLY NEVALA: Alright, got that right. Miriam has a fascinating and extensive background in policy development and advocacy. Tell us a little bit about that because, as I just said to you, I think you've done enough work for two professional lifetimes. What was the spark that shot you into the work that you're doing now with EqualAI?

MIRIAM VOGEL: Well, thank you for the generous introduction. My career did not seem like the optimal path for the work I'm doing now. But looking backwards, it was actually a perfectly placed path to the work in this space that I feel so privileged to be doing: looking at how we can make sure that AI is safe, and inclusive, and fair, and effective.

The short version is I started off in policy and went to law school really interested in how frameworks could bring safety and how we can make sure that civil rights are propelled forward in all different venues. I was an in-house counsel and worked in technology, worked in IP, came back to join government. And in that capacity, I both worked with tech through the legal space but also worked on bias mitigation.

So flash forward a few years later when EqualAI was being formed. We were realizing that bias and harms in AI were the same as issues we've dealt with in the past. It's similar to addressing bias and harms in the workplace, where I was fortunate to lead President Obama's Equal Pay Task Force. I was very fortunate to work for Sally Yates when she wanted to create implicit bias training for federal law enforcement. So a lot of these issues are similar issues with similar solutions in this new medium of AI.

KIMBERLY NEVALA: It's an interesting point because so often, when we're having these conversations, people really speak as if AI is an entirely new monster or creation. Would we benefit from taking a step back and thinking about this as perhaps no different? It operates in a different way, but it is no different in terms of outcomes from previous products, services, and processes.

MIRIAM VOGEL: There is so much to learn from precedent. Both in how we handle innovation, how we spark innovation and how we make sure that we have more democratized access to the innovation.
So yes, we've done this with airplanes. We've done this with cars. We've done this with computers, broadband access. We've been thinking about how we make sure that innovation is safe, and that it is inclusive, and that more people can benefit from it for a long time.

There are, to be sure, different aspects of AI, absolutely. There are elements - for instance, the fact that as we regulate it, it knows no borders. AI is crossing borders without any concern for what jurisdiction it's in.
And yet our laws are generally domain specific and jurisdiction specific. So that's one of the new elements. Certainly, the technology presents new questions. But I think it is very fair to say that so much of what we need to understand about AI, and how to regulate it, and how to support it can be based on the precedent of what we've already seen and done.

KIMBERLY NEVALA: What are the most impactful or important precedents that people tend to overlook?

MIRIAM VOGEL: Good question. I think there's a few ways to look at it.

First of all, there are the laws on the books. And second of all, there are the conceptual ways we've regulated industries and the ways we ensure safety and trust in those systems. And that's what we're doing and need to be doing with AI.

So on the latter point, think about how we flew here yesterday and today. We drove on the roads to get here. We would not have done so if we did not believe that the flight would have taken off and landed safely, not crashed with other airplanes mid-air. There are mechanisms in place to ensure that there are universal standards. Some of those are domestic and some of those are international. There are systems in place, and there are coordinating functions in place to make sure that there's alignment on what those international standards are, so that we trusted those planes to bring us here.

Likewise, on the roads, we have lines. We know that, for the most part, people will stay within those lines. Seatbelts, airbags, more and more innovations help us ensure that, as we are traveling by car, we can trust that it is safe. We'll get to the other end intact, safe, alive. We have different speed limits in each jurisdiction, but we know there is a speed limit. For so many reasons, that is a good analogy for AI. We're not going to want the same level of regulation, the same types of regulations, across the board for every use. But we know that there are certain issues that need to be standardized and regulated.

In terms of the laws, there's a deep story here. I'm excited: we have a paper coming out in an NYU law journal in July, because I think too many people think that this is lawless, that this is unregulated and we're in the Wild West. That's just not true. There are certainly novel questions and areas where we need more clarity, but there are a lot of laws on the books in the US and across the globe that can be and are applicable to AI use.

KIMBERLY NEVALA: Yeah, I do find it perplexing that the fact that a product has an AI model embedded in it, or it's an interface that's AI driven, somehow confounds people or confuses people about the fact that it is still a product. It's still a service. There should still be an expected level of quality, a certain outcome we're going for. So I think what you've just said reinforces a bit of that as well.

MIRIAM VOGEL: Yeah. There are two pieces of that - many, but two in particular.

EqualAI works with companies to help ensure that they know how to be a responsible AI leader. It's not clearly defined what that means right now. So we want to make sure that we learn from the laws that are out there. That we learn from best practices from those who want to lead in this space because my belief is that most companies are now AI companies and don't realize it.

And if they do, they need to realize that they need to have a framework in place. Much like cybersecurity 15, 20 years ago when too many companies didn't think they needed a cybersecurity plan. They didn't think that they were under threat until it was too late. And at that point, once you're under the threat, it's too late to put your plan in place.

I think that's where we are with AI. Most companies need to have a governance plan in place, a framework for responsible governance. And if they don't realize that, it could be too late.

KIMBERLY NEVALA: So let's talk a little bit about corporate governance. The first question I'll ask is, what are the net new questions that companies need to address or think about?

MIRIAM VOGEL: One issue with AI is how many hands it touches, both within your organization and outside of your organization. Yet there are no clear standards for demonstrating how it was tested or for whom it was tested. What we talk about at EqualAI is having good AI hygiene. And I think that needs to be adopted across the board.

We need to be normalizing this idea that AI is not for someone else to regulate and safeguard. Yes, the AI companies have a lot of work to do to make sure that the products they're putting on the market are safe. But there are many reasons why it is in a company or organization's best interest to make sure that they are doing their best to safeguard anyone touching it. And so - happy to talk more about that - but in general, good AI hygiene means making sure that you have someone accountable.

At the end of the day, where does the responsibility lie in your organization? So when the incident comes up, when the framework needs to be improved, when the budget and other considerations are at issue, there is somebody who owns it. You need to make sure that there is clear documentation of what was tested and when. At the end of the day, you're going to want to know which use cases it was envisioned for and for whom it was built, because you're also going to want to know for whom this could fail.

KIMBERLY NEVALA: So we could take this a lot of places. And at the risk of asking an unfair question - it's not just a risk, I'm going to ask it - what are the elements that organizations overlook or tend to downplay the most, at their own peril?

MIRIAM VOGEL: Well, abstinence is not an option. School districts, companies, we all need to realize that AI is a part of our lives.

I've had too many companies tell me we don't need to think about responsible AI governance. We don't use AI. I tell them to go talk to their HR teams. And 100% of the time, they come back and realize they are using AI in lots of places. Including some pivotal functions, where they really want to make sure they have oversight.

The other area is thinking about the reasons why responsible, trustworthy AI use is a benefit to them.
It helps with employee retention. Employees want to be a part of something that is meaningful. They want to make sure that the AI that is either supporting their products, their systems, or what they're a part of deploying is beneficial and not harmful. You can have a broader consumer base if you're thoughtful about for whom your AI is safe, for whom it has been tested.

On the flip side, it'll be hard to repair your reputation if it turns out that your AI is not safe or that it has damaged certain populations or can cause risk. And you can't overlook the liability which too many companies are not realizing.

Federal agencies in the US have put people on notice: if you are using any AI in your employment system, in your HR systems, the EEOC has said, we have purview with our civil rights laws over those systems.
If you are in any way using AI that is violating the rights of a protected class within our purview, be on notice. We are on the lookout. And they've already started to bring some actions involving age discrimination and so forth.

There have been historic joint statements. For instance, the EEOC and the Department of Justice came together and said, be on the lookout for the civil rights laws that you could be violating with your AI use.
The Americans with Disabilities Act is a key opportunity for you to make sure that you're being inclusive. If you're not thinking about the fact that your AI use could be exclusionary - that those who cannot hear can't use your audible AI systems, that those who cannot see are not able to access them equally - you could be violating the Americans with Disabilities Act.

There was another historic joint statement with the EEOC - it's the whole federal alphabet of organizations: the Department of Justice, the CFPB, the FTC, the Federal Trade Commission. They've all come out together and said, be on the lookout. They don't want people to be violating the law. They don't want there to be harms or liability.

So I think too many companies don't realize the liabilities. And if they do think there are liabilities, they think it's someone else's problem. They think the big tech companies will be the ones who are in violation. Well, again, the EEOC has been very specific: they don't think their civil rights laws apply to those companies unless it's for their own internal use.

The user is the one who will be found liable. So you can't just rely falsely on some other company to protect you that they've tested for trustworthy, safe AI. It has to be something you do if you're going to use AI.

KIMBERLY NEVALA: Yeah, so he who wields the tool is responsible for the outcome of the tool.

MIRIAM VOGEL: Absolutely.

KIMBERLY NEVALA: It's actually heartening to hear about, and to have seen, the agencies really stepping up to enforce what, in a lot of cases, are just existing laws and regulations. Because for a while I've had this niggle in my stomach about whether transparency was going to become what I call the new terms and conditions. Which is: you've effectively got 16 pages that you scroll through, and then you sign any and all rights away. But really, no one understands what they are, so it's not informed consent. So there's this idea that companies would just say, hey, be aware that we're using an AI system - as long as we disclose we're using it, we're good. And as I said, to me, that means transparency becomes the "get out of jail free" card. It becomes the new T&C. Is that even a credible concern or not something to worry about?

MIRIAM VOGEL: Absolutely.

On the one hand, there's so much need for clarity on what is baked into the AI system - to have the nutrition label. On the other hand, we need to make sure that what is on that label is meaningful, and that it's updated, and that it's not a "one and done"-type of test, because AI will iterate over time. The information will go stale, and whatever we've tested for will need to be updated.

So I think both are true. We need to have some clarity on what the standards are and what is shared with the end user. On the other hand, we have to make sure that there are meaningful standards and that different people along the AI life cycle have a requirement to update it.

I think the last piece that we should talk about at some point is also: transparent for whom? So when I said end user, in your mind, there are many different ways you can answer that. Is it the end user in a "business to business"-type of situation? Is it us? If you're giving AI to a school district, you're going to want different levels of explainability. Your transparency will need to be interpreted differently depending on for whom you intend it to be transparent and explainable.

KIMBERLY NEVALA: Yeah, that's an important point. I'd probably underscore that and replay it on a loop. And it ties into the question I wanted to ask you. We talk a lot about or there's a lot of advocacy out there that says we need to have participatory governance. We need to start to think about stakeholders differently so that we are including those for whom this might fail and who it might be applied against, whether or not they recognize that.

I'm applying for a mortgage. I may or may not know that this is being used. It's being used in the school system in various ways: to inform what education I'm provided access to, how teachers view me, and so on and so forth. So this means we're not just talking about the teacher using it, or the mortgage analyst, or whoever it is, but now going out into communities that are in fact the purported beneficiaries of - but often the victims of - these systems.

Is there anyone today that's doing participatory governance well, in terms of engaging external stakeholders?

MIRIAM VOGEL: Such a great question. I think it's the key question.

Something we always talk about is making sure that different demographic populations are represented, that we have more diversity in who is part of the development, who's part of the deployment, who's part of the testing. I can think of examples of who is doing well. But I think another challenge we have right now is, what does it even mean to be doing it well? Diversity of whom, and for whom, and for what?

We need to do more work to fill in that gap. When we're saying we need to make sure that more people participate, we have ideas in our mind. But if we're thinking about who's not participating, there are many ways to answer that question. It's underserved populations. It's underrepresented populations. We have too few people of color participating in the deployment of AI and the building of AI. We also have age gaps. We don't have people above certain ages participating. Unfortunately, we do have young people participating - I don't know if it's knowingly. They're part of the training.

KIMBERLY NEVALA: The grand experiment.
[LAUGHTER]

MIRIAM VOGEL: Exactly. We're talking about region, geography. Too many apps are built for people on the coasts, but we need to make sure the middle of the country - and every continent - is included. There is some really interesting work being done on different continents, but certainly not enough to make sure that we're talking about robust, inclusive deployment and building of AI systems. So we actually need to do more work to define what it means to be doing this well.

KIMBERLY NEVALA: I can imagine, as we're speaking and people are listening, there are folks and organizations that are just backing up and going, I can't take that on. I don't have the time, the energy, or even the scope to address questions at that level or a community of that scope.

MIRIAM VOGEL: And I would sympathize. I would say that it is a grand question.

No organization or person should have to take this on alone. I think this is a place where government, nonprofits, and civil society should be helpful in defining what we will consider safe, useful, or even sufficient, if we're talking about broad participation.
But the good news is that all the research and my own personal experience shows the more companies make sure they have multistakeholder participation in their systems development, testing, deployment, the better off their AI is. It also impacts their culture.

My small experiment is with our badge program for senior executives. Originally, we wanted to create responsible AI governance training for one type of position within an organization. We found that the responsibility actually sits with different titles and individuals in different organizations. So we've organically had this class with multiple stakeholders. It's the legal counsel, the chief privacy officer, the chief data officer, the chief AI officer. It's engineers, and physicists, and all different kinds of thinkers who are asking, what does it mean to responsibly govern AI? And as a result, each person who's participated has said they've really benefited from being part of that conversation.

So a really hard question to answer, but even in experimenting in how to answer that question I think companies will find great benefit.

KIMBERLY NEVALA: And it sounds like if you just start to build that network it will organically grow. People will opt in and they will bring other folks in. So it's not about starting with the optimal network and everyone linked in. It's getting a good core and then letting that grow organically.

MIRIAM VOGEL: Yeah, such a good point. It's a mindset. It's a mindset.

Too often when we're talking about AI, we're talking about computer scientists and engineers. Or we say, let the lawyers handle it. And that has to become a "yes, and". We can't ask one type of population or profession to solve this for all of us. This is going to require all hands on deck.

So it's a mindset of wanting greater participation, wanting different perspectives of all different types in the process of evaluating how your AI systems should be functioning and making sure that they comport with your values as an organization. And in thinking about it in those inclusive ways, you'll naturally come up with a multistakeholder program that will have multiple benefits again, from the products that you're building to the trust that you're building.

KIMBERLY NEVALA: Now, back in, I think, about 2021, you wrote an article that I found recently that was a call for more active engagement at the national level - both in terms of policy and regulation. I was going to say it foreshadows a lot of the work going on right now. But looking back at it, in fact, it's a pretty good roadmap for what's being laid out. So it seems that if you were not an active architect, you were certainly at least a key influencer in doing that work.

In your work today in NAIAC, with what you're seeing with EqualAI, what are the most promising aspects of the regulatory regime and policies that are being created today?

MIRIAM VOGEL: I think people can take great comfort when they look at what's been happening in the Federal Government in the US over the past few years. As we talked about, federal agencies have all come out and talked about how they're thinking about their own AI use and how they're regulating.

We've had this executive order come out last October that is the most sweeping in history. Which is interesting, because they don't like describing it as such, but it is. So I'd rather call it what it is. There are 150 assignments to over 50 federal and quasi-government agencies. It touches every part of government that could benefit from, safeguard, or interact with AI. It's a starting point. We have the 120-day deliverables coming up in the next two weeks, so we'll hear a little bit more very soon about where these are. There are very tight deadlines for a lot of this work.

But each of those was a starting point. It was a game plan. It was an action plan. It was building a task force. It was making sure that equities were underway. But there's a lot of follow up. And we all need to make sure that the Federal Government continues to lead in this way, continues to make sure that they're using AI to create efficiency and that they're testing it to make sure that it's not having downsides and discriminatory effects that are not intended.

KIMBERLY NEVALA: So there was a recent executive order - I think it was the executive order, you'll correct me if I'm wrong - dictating that all of the key agencies appoint chief AI officers. Is that a good move? What does that help us hedge against or move forward? Are there any downsides to that?

MIRIAM VOGEL: I think it's a great development.

You mentioned the National AI Advisory Committee that I'm privileged to chair. That was one of our recommendations: to make sure there's more robust leadership across the federal agencies and more capacity within the federal agencies as well as within the White House. So we were very pleased to see that both of those things happened. They created a task force that reports up to the president and has interagency support. That's been very active. And in each agency, there's a designated point of contact.

It comes back to, again, good AI hygiene. You need to have clear accountability. You also need clear systems, and you need them communicated enterprise wide. When we're talking about the Federal Government that's very challenging, but it's much more feasible if you have clear points of contact with the capacity to lead on these efforts, and to coordinate, and to know with whom they should be coordinating. So it's a very significant positive development that each agency now has this point of contact. Everyone in the agency knows they exist if they have a question or a concern. And across agencies, they know who to be coordinating with. As well as for the White House: when they need to be spreading information out or gathering information, there's a clear point of contact throughout the agencies.

KIMBERLY NEVALA: Outside of things we might have already touched on, are there any particularly pernicious or aggravating, to you, misconceptions or misperceptions about the role of regulation and policy?

MIRIAM VOGEL: I think regulation is seen as either a boring word or a bad word.
And certainly, if you look at all the regulations out there, or if someone is talking about tax day and going through filing your taxes, it's not going to make you so pleased about more onerous requirements, or more questions, or more data to be deciphering and filling out.

But if we think about regulation as a way to build trust and as a way to spur innovation, I think that's really the way we need to be thinking about it.

Fortunately, across the Federal Government, there is a lot of that thought right now. And across the globe a lot of different countries are thinking about how can they be a home for spurring innovation through AI. I think the opportunity is not lost on many countries as to how they need to benefit but that there needs to be safeguards in place.

I say I'm AI net positive. I am excited about how this can democratize opportunity. But we have to be careful to make sure that it's actually doing that, not scaling harms and discrimination.

KIMBERLY NEVALA: So is it fair to sum up in the simplest terms possible that regulation is not there to inhibit innovation, it's there to inhibit harm?

MIRIAM VOGEL: Like AI, regulation is a tool.

In the best case, it's used to build trust. It's used to ensure safety. It's used to ensure clarity and a shared understanding of what the expectations are for the consumer, for the producer, for the end user, whoever that is. As well as for the government to know what its oversight and function should be.

So I think it can be onerous. It can be overbearing. But it's not in and of itself a bad thing. And, often, in so many domains, it is something that we benefit from.

KIMBERLY NEVALA: Now, we talk about literacy. How do we make the general public literate in and able to understand AI - a term, by the way, we should never use on its own when we're talking about AI strategy, roadmaps, literacy, or otherwise, because it is so broad and everyone's definition of it is a little bit different.
You're on NAIAC. I said that right, right?

MIRIAM VOGEL: You did.

KIMBERLY NEVALA: Which is there to advise the government, certainly. But I would imagine that there's work to be done to make sure that policymakers and agency staffers also have a foundational level of understanding and literacy. Because organizations like NAIAC, or even having a chief AI officer, can contribute. But you're not there every day, every minute. You're not there helping mediate when the techno-optimist says, “it's all good, back off” and on the other side someone's saying, “it's all going to -- in a handbasket right now, stop it all.”
So how do you assess the current level of literacy at the policy-making level? And what needs to be done to raise it, if necessary?

MIRIAM VOGEL: The good news is there's a lot more AI literacy, sophistication, and awareness than a few years ago across the board. And in the US Government in particular, if you think about Congress.

A few years ago - very few - I actually had a member in a congressional hearing ask me about A1. So we've come a long way since then. We have seen the Congressional Insight Forum. We've seen hearings in almost every committee, repeatedly, on AI, where they're trying to get smart. They're trying to understand what their role should be and what's happening in this space. And we've mentioned some of the ways across the Federal Government where there's so much activity and awareness.

I think, overall, there's been a huge upgrade in awareness and sophistication. But you're right. There's so much more that needs to be done.

If you're talking about procurement: all the procurement officers, everyone in the procurement pipeline, they're buying systems that either could be powered by AI or that are AI systems. They need to know the questions to ask. And if they get an answer, they need to know how to push back to make sure that they were both operating with the same definition, that the answer was clear to them, and what the follow-up question should be. They need to have a baseline of AI literacy and awareness in order to have the confidence and ability to ask those questions and follow-up questions. And then, for that to have meaning, they need a supervisor who, if they flag a concern, can understand why it's a concern and take the right action, knowing it's a meaningful issue that should not be overlooked or brushed aside.

So what needs to happen? In the executive order, there were calls for AI literacy. There were calls for plans to educate those in different areas where they're going to be touching AI and AI policy in particular. The OMB memo from the White House that came out in conjunction with it in draft form, and is now finalized, also talks about operationalizing a lot of AI policy internally across the federal executive branch.

There are a lot of calls for education, but we need more. We need more resources, and we need more opportunity and mechanisms to make sure that our federal workforce, our workforce, our kids have more access to AI education. And not to say they have to be computer scientists.

KIMBERLY NEVALA: No.

MIRIAM VOGEL: That is not at all what we're saying. But an awareness that the system that they're using has AI in it, and what that could mean, and how to test it to make sure that it's safe and effective for them.

KIMBERLY NEVALA: And to decide when to engage with it and when to not, for that matter.

MIRIAM VOGEL: Absolutely.

KIMBERLY NEVALA: I would imagine that should be a pillar of what every chief AI officer, whether they're in a corporate role or in a governmental agency role, is doing: developing a sustained, ongoing program around literacy and engagement. And again, the technology is changing fast, so that's going to need to be sustained and maintained over time.

MIRIAM VOGEL: Yeah, and there are some good resources out there. Some that nonprofits have put out but also right within the government.

The legislation that mandated the NAIAC also mandated that NIST, the National Institute of Standards and Technology at the Commerce Department, create a risk management framework for AI. So they put together this really important document last January. It's online. To make sure that it is user friendly, they also have a playbook so that people understand how to use it. It really compiles best practices, starting with the question of ‘is AI even the right solution for this question or problem?’, then covering the different stages of the AI life cycle and the different questions that need to be asked. So there are now resources out there where the Federal Government and other actors can benefit and level set on best practices that have been assembled by this body and by others.

KIMBERLY NEVALA: You've been very generous with your time and your knowledge. So I will ask just one more question which is: what is the question that I - or others like me or people in your broader travels - are not asking that you wish they would and why?

MIRIAM VOGEL: The question that I think people should be asking is, ‘How can AI help me do more, do better?’ And, ‘Was this AI system created for this use and this user - me?’

I think if we all ask those questions - how can this benefit me, for whom was this created - then for whom this could fail will become more apparent. And we can understand when it's safe to rely on AI and when it's not. When we can demand more of our AI systems. And how all of us across the globe, in all different regions, of different ages, of different nationalities, races, and ethnicities, can benefit, regardless of those demographics.

We should all be able to use this opportunity. What is it going to take to make sure that we can use it to help us thrive?

KIMBERLY NEVALA: That is a fantastic call to action and an insight to end on. I really appreciate your time and all of your insights. You've been very generous.

MIRIAM VOGEL: Thank you for having me.

KIMBERLY NEVALA: If you'd like to continue learning from leaders such as Miriam, please subscribe and listen in. And, as I said, we'll now be available on YouTube, maybe, we'll see how this goes… [LAUGHTER]