Purpose 360 with Carol Cone

Artificial intelligence has the power to reshape economies, societies, and our daily lives. But with its rapid rise comes an important question: how can we ensure AI is developed and applied ethically so that it serves humanity instead of harming it? Responsible use requires transparency, accountability, and inclusivity—but defining and implementing these is complex. JUST Capital, a nonprofit dedicated to advancing just business practices, is addressing this challenge by exploring what “just AI” looks like, while also giving both the public and companies a voice in shaping its future.
We invited Martin Whittaker, CEO of JUST Capital, to speak about how companies can responsibly navigate the opportunities and risks of AI. He highlighted the importance of aligning AI strategies with company values, building strong governance, and listening to stakeholders to guide ethical decision-making. Martin also shared insights from JUST Capital’s new research, which reveals a gap between companies acknowledging AI and those taking meaningful steps, such as workforce training and transparency. He ultimately challenges business leaders to reflect on what it means to be a truly human company in an AI-driven world while assuming the responsibility that comes with this technology.
Listen for insights on:
  • How AI layoffs may require new ethical standards and practices
  • Why company culture determines success in AI adoption and use
  • Lessons from early leaders like IBM and Boston Scientific
  • The growing role of investors in shaping AI accountability
Chapters:
  • (00:00) - Welcome to Purpose 360
  • (00:13) - Martin Whittaker, JUST Capital, and AI
  • (02:40) - Who Is JUST Capital?
  • (03:33) - Describing Justness
  • (04:44) - Responsible AI
  • (08:25) - Early Measure of Just AI
  • (11:12) - Martin’s AI Usage
  • (12:49) - AI Use Principles
  • (14:58) - AI Study
  • (17:04) - What Stood Out
  • (21:44) - Adding AI Methodology
  • (24:27) - Advice for Companies Slow to Adopt AI
  • (26:38) - Last Thoughts
  • (28:15) - Can AI Replace Humanity in Business?
  • (29:57) - Wrap Up

What is Purpose 360 with Carol Cone?

Business is an unlikely hero: a force for good working to solve society's most pressing challenges, while boosting the bottom line. This is social purpose at work. And it's a dynamic journey. Purpose 360 is a masterclass in unlocking the power of social purpose to ignite business and social impact. Host Carol Cone brings decades of social impact expertise and a 360-degree view of integrating social purpose into an organization into unfiltered conversations that illuminate today's big challenges and bigger ideas.

Carol Cone:
I'm Carol Cone, and welcome to Purpose 360, the podcast that unlocks the power of purpose to ignite business and social impact.

Artificial intelligence is often described as the next great industrial revolution, an innovation that could simultaneously reshape economies, societies, and even national security. In July 2025, the US launched America's AI Action Plan with three pillars: innovation, infrastructure, and international leadership. The vision: to create an era of human flourishing, competitiveness, and security. The stakes are incredibly high and the opportunity profound, but it raises a critical question: as AI accelerates, how can it be developed and applied responsibly? To explore this, I'm joined by one of Purpose 360's favorite guests, Martin Whittaker, CEO of JUST Capital. JUST Capital is the leading not-for-profit committed to making the case for just business, demonstrating through research, reports, and thought leadership the value of stakeholder capitalism and leveraging private capital for positive social change. Its mission is to empower investors, business leaders, and consumers to make more informed decisions, steering capital towards companies that prioritize the public good. It's fitting that JUST Capital weighs in on the challenges and opportunities for AI to be just as it continues its rapid adoption across business, the economy, and daily life.

In this conversation, Martin and I will explore: What is just AI? What are the principles and standards to ensure AI is fair, accountable, reliable, safe, and inclusive, and how are they evolving? And, ultimately, the big question: can AI ever replace humanity in the work that we do? It's a timely discussion on the intersection of technology, ethics, and purpose, so let's begin.

Martin Whittaker:
Thank you, Carol.

Carol Cone:
Well, thank you very much. Anybody I meet, I always say, "You must analyze JUST Capital. Listen to their podcast. Look at their research, it's great." Can you just, at a high level, explain who JUST Capital is and what's the company's purpose, your organization's purpose?

Martin Whittaker:
Sure. We're a New York-based 501(c)(3) nonprofit. Our big mission is really to try and get big corporations, the private sector, to do more on big societal challenges. We feel as though without really the markets and private sector leaning in to create a more just future for everybody, government and philanthropy can't really get the job done by themselves. You need markets to do that. Companies have a lot of incentives to do that because being just, being excellent at stakeholder performance is tied completely to market performance, competitive performance, and ultimately financial success.

Carol Cone:
Yeah, and I love the way... Can you talk about how you describe justness from the surveys you have done with the American public?

Martin Whittaker:
Exactly. We don't define what just means. We don't define what issues matter. We put our faith in the American people, so we do, and we always have, this is going back 11 years now, extensive survey and public opinion research to really take the pulse of Main Street America, Carol, on a fully representative basis, to ask them, "What do you think about corporate America? How do you define this idea of a just company? What issues matter?" Americans are pretty much united on what they expect of corporate America, how they define just, what the issues are, and what they want companies to do. I see us as a beacon for really championing the best of American business. That's how I think about it.

Carol Cone:
That's why, from the very beginning when I saw your research, I was like, "I got to get to know this guy and invite him into our family and our community," so thank you very much.

Today, we're going to talk about artificial intelligence and its responsible use, and I don't think there's any better organization really in the country, the world, to talk about its responsible use. But let's just start with the definition, you may or may not agree with it, so we'll talk about that, of what responsible AI is. Responsible AI is a set of principles, standards, and practices for developing and deploying artificial intelligence systems that are fair, transparent, accountable, reliable, safe, and secure, really important to be secure, while also respecting privacy and being inclusive. Responsible AI aims to ensure AI systems are designed and used to maximize benefits for humanity, and I love that humanity is in this definition, and society while minimizing potential harms or risks. Do you agree with this or do you have a modification of this?

Martin Whittaker:
I agree with the thrust of that definition and a lot of those words. I'll point out a few things though, just I'm hearing that for the first time, obviously. Some of those words are subjective. What do you mean by fair? Fair to whom? What does inclusive mean? Would that depend on your point of view? Everybody, is that some people? Some of those terms, just in our experience at JUST Capital, because we dealt with language-

Carol Cone:
A lot of language, yeah.

Martin Whittaker:
... a lot when it comes to issues, I think there's some room for interpretation on some of those things. Second set of observations is, how do you actually do that? Whose responsibility is it to advance that? Is that a government? Does government decide what's in the best interest of, what did you say, humanity and society?

Carol Cone:
And society, right.

Martin Whittaker:
Well, who decides that? Who gets to decide? How you get to that, how you go in that direction I think is an issue. I'll give you this. This issue in one sense reminds me a little bit of climate change insofar as it's a big issue that affects everybody. People have very different points of view on its severity, its urgency, how it's going to affect them individually and what to do about it. Is it government's responsibility or is it the private sector or is it someone else or is it... There's a lot of lessons learned from that, for me, which we can go into if you want, but I think a lot of the definitional elements that you just described show up in the survey work that we've done and that we're about to do. I'll talk about that. They show up in other surveys where you see a lot of fear, a lot of concern about social stability, safety, security. Are we going to repeat the mistakes we've made with basically unfettered social media use where regulation was non-existent or very light? A lot of detrimental impacts.

There's a lot of examples of well-intended... the good intentions to try and create that, but how do you actually do it and what do those things really mean at the end of the day?

Carol Cone:
Let's now pivot to the early measure of just AI. Why was that one of the four key topics for JUST Capital this year and why is it critical to publish it?

Martin Whittaker:
It's one of the four key topics because it's probably the overwhelming issue that we're grappling with right now. What does just AI deployment, development... What does it even mean? It is an issue that is just coming at us, barreling towards us at 100 miles an hour. Every day, there's a new report or survey that would describe the impacts, the risks, but also the opportunities. There's a lot happening. We at JUST Capital decided we got to get our arms around this. This is going to affect the role of corporations in society. It's going to affect how people think about what even is a just society and a just company. It affects everything, Carol. That's been a recognition that we've had over the last 12 months, and we're tackling in lots of different ways, and this is one of them.

One of the things... At the top of the show, I said, "We take our cues from the public." We're going to do a lot of survey work on how different stakeholders think about responsible AI. I'll talk about that in a second. One of the other pieces of the puzzle for JUST Capital is, what are companies actually doing? What are they disclosing? And then we'll start to look at, "Okay, how does that match up with other things they're doing? How does that match up with their overall performance in the just indices, in the JUST 100?"

Carol Cone:
With the JUST 100, you review information in the marketplace. Let me get that right. They don't enter, right?

Martin Whittaker:
Exactly. They don't enter. We choose to analyze companies in the Russell 1000. That's our base universe-

Carol Cone:
Based on the knowledge in the marketplace.

Martin Whittaker:
... and we are going to expand that. We are developing a toolkit that any company can use, not just ones that we track. But anyway, that's our coverage universe. This issue is going to transcend everything we measure. It's going to affect product safety, it's going to affect workforce issues, compensation issues, it's going to affect training, which we can talk about, it's going to affect customer experience, safety, you name it. There's some element of AI. We got to get our hands around that, and we will. It's not going away. We're going to have a lot more podcasts like this where we talk about what AI means to our worldview.

Carol Cone:
That's terrific. So I want to turn it to you personally. How have you started to embrace using AI, not your personal life, that's your personal life, but in your work life?

Martin Whittaker:
In my own experience, the first thing you got to do is start becoming a user yourself every day. I got a note one day from our former chairman who said, "The most important thing you're going to do is listen to this podcast," and it was a podcast about... I think it was Ethan Mollick from Penn talking about AI in the workplace, and he was asked the question, "What's the one thing you would advise a CEO to do right now?" This is a few months old at this point, and he said, "You got to start using it yourself and you got to develop a game plan for yourself." So that's what I did.

I developed a game plan. I had my team put together a cheat sheet for courses I got to take, evenings and weekends. That is my personal life right now. I got to take these courses, I got to get familiar with all the different approaches, the different applications, the different apps, different uses. I got to figure out how it works for me on a day-to-day basis, and I got to go on a journey, and I did that and I'm still on that obviously. It's helped me become a lot more just discerning and realistic about what AI can and cannot do, what it really is, how it can help me with my day-to-day, but also what it means for our company and what it means for the companies that we track.

Carol Cone:
Right. Now, do you have responsible AI use principles for your company?

Martin Whittaker:
We're developing them. One of the first things I did was establish an AI user group within the firm and appoint someone to chair that, Molly Stutzman, if Molly's watching. We have a group of folks who are coordinating that, sharing ideas, sharing useful new developments they come across. We've just started, for example, to create a database of AI surveys, third-party AI surveys, that are out there. Our head of survey work, Jennifer Tonti, created that. Essentially, it's tracking as close to real time as we can what other surveys are saying about AI, and we're using AI to compile that and analyze it. That's one example, but we do that across the firm. That's one thing, the user group.

We have a sort of a game plan that is both internal and also external as it relates to our strategy. How do we think about our strategic priorities on this issue? We have an external AI advisory council, superb people on that who can guide us, give us advice. So it's multi-layer, Carol. Yeah, I think it's only going to become more important, more universal, more ubiquitous in our day-to-day.

Carol Cone:
Yeah, I'm glad you said that. Just in all transparency, I have appointed one of our partners who was insanely curious and brilliant, so she's heading up our AI analysis journey training for all of our people. I use it at least two or three or four times a day, but we also have to balance it. We always say that you've got to have the human review. You've got to make sure the content is correct because hallucinations still happen and you get some strange responses no matter what you're doing.

Martin Whittaker:
You've got to have human in the loop, as you say.

Carol Cone:
Let's go back to what you have done with Sofia Maria. What did you find out initially from studying AI this past summer?

Martin Whittaker:
Well, we found there was a significant disclosure level. So 84% of our universe, the Russell 1000, mentioned AI publicly on their website or in their financial reports. We were very focused on internal talent development and training. Only 20% disclosed on that topic. Disclosures, right now, I would characterize as being all over the place. There's no clear concentration of the nature of disclosure. It's sort of mentioned. What Sofia found was that there was this gap between general disclosure without any specificity and only a smaller number, as I said, 20% of those companies, were disclosing around workforce training initiatives, which is interesting because the most prevalent type of AI application in companies right now, I think, is on worker training and productivity-based issues.

What I hear most from board members and from business leaders is, "That's great, but improving efficiency only gets you so far. What's the real business case? When are we going to see a return on our investments that is around growth, around smarter products, more sustainable competitive advantages that have been created and supported by AI?" Examples of those are hard to find, unless you're the actual developers themselves, the Magnificent Seven, with the amount of money they're putting in. But even there, investors still want to know that there's going to be a return at the end of the day. I think that's the big question mark right now.

Carol Cone:
Okay, so you identified a group of companies that were, interestingly, in 13 different industries, from Hershey to Visa to IBM, Salesforce. What did you find that they were doing right, so to speak? What were their best practices that they really stood out from others?

Martin Whittaker:
Mostly, it was around very specific and clear programs that were tied to, let's say, their business or their natural competitive advantages. IBM, for example, as one of the companies that we highlighted, their watsonx challenge is essentially leveraging their technology and data capabilities to advance broader learning within their company and also new revenue lines. Full disclosure, we have a relationship with IBM... now, we're testing watsonx and it's helping us with our own chatbot and our own modeling.

But you see training applied in particular company circumstances. S&P Global stood out for that in their disclosures. Another example was Boston Scientific, which is, in the report, their GenAI academy, again targeted training, targeted upskilling, targeted development and learning within the company on essential elements of their business. Dayforce also stood out. So the report's on our website, there's examples there. There are other examples that are provided. I would say companies are early on their journey, Carol. These are sort of initial examples of the first few baby steps companies are willing to kind of talk about.

Carol Cone:
Good. Besides the report, do you have any more informal cohorts that you have created with some of your just companies who are truly trying to expedite their learning?

Martin Whittaker:
We're developing that right now. We have a two-pronged initiative underway at the moment. The first is a major survey initiative to essentially poll the public as consumers and workers, to poll investors who we will be polling in partnership with Robin Hood Foundation in October, a couple of thousand investors there, and then to survey corporate leaders with a consistent set of questions around responsible AI, and that will tell us where there's areas of alignment and agreement and where there's tension, where there's disagreement or maybe even opposing interest and opposing forces. So that'll give us a really good sense of how different essential corporate stakeholders think about AI.

And then the second part of our current strategy is to begin to track what companies are doing, and that speaks to Sofia's work and other work. We're tracking that. The last thing on this was sort of... I'm very loath to rush to judgment, Carol, about what is just and what is unjust. I think it's tempting, but we don't want to make that mistake. "Is a company laying people off just or unjust?" We've actually put that question to the public several times before, and it may surprise you, but the answer was, "Well, it depends."

Layoffs per se by themselves are not unjust. That's part and parcel of a free market economy. How you do it can be more just or more unjust, so there is a playbook for a just layoff. And then you look at AI and you say, "What's different about AI layoffs?" while we're on that subject, "Are they any different? Should those laid off because of a company's AI deployment or adoption be treated differently on the way out? Should they have some additional severance because they've been laid off due to AI and not some other issue?" These are really key questions.

Carol Cone:
It's an interesting question. You wonder if there's going to be this over-indexing on layoffs in response to the new thing, AI, which is not a thing, it's our future. I'm curious, and I know that you said you're going to add an AI methodology analysis to the JUST 100, do you have any sense now of what that might look like and will it supplant?

Martin Whittaker:
We have our five stakeholder model, as you know. So that's workforce or employees, customers, communities where companies operate, including their supply chains overseas, the environment, and then their shareholders and their governance structures, you can add the corporation itself as a sixth stakeholder, but that's our model. And then our first sort of framework question is, what does unjust and just behavior look like across each of those stakeholders when it comes to how a company is using or deploying AI?

First, it comes with the judgments around what the stakeholders think about those things. Then comes the tracking of the companies, which we'll do very transparently. And then comes the judgment around, "Okay. What does just AI behavior look like when it comes to a company's relationships with its workforce, with its customers?" I imagine things like... A lot of the things we see every year, things like data privacy, customers are very focused on that, very focused on transparency, very focused on ethical leadership, very focused on compensation and pay, these are the fundamental questions. I don't think those are going to change. If you take our number 1 issue from last year, which is really around fair pay, what does fair pay mean in an AI world?

That's a really interesting question. Does that mean getting paid the same as my peers? Does that mean I feel like a certain group of people are getting disproportionately rewarded? It leads us to ask really interesting questions, which I did in my newsletter this week of, "Is there any obligation for companies that experience tremendous increase in market value laying people off? Do they have any obligation to those folks who've been laid off or to society at large?" If not, we might see increasingly concentrated levels of wealth inequality. That doesn't go well with a country where, if some predictions are true, you're seeing 10, 15% unemployment. These are really critical questions that we got to get [inaudible 00:37:06].

Carol Cone:
Absolutely. I am curious about companies that are the laggards. They're still large, but they're laggards in terms of going on their AI journey. What advice do you have for companies who are beginning to get really serious about this, but they really haven't gone far yet?

Martin Whittaker:
Start at the top. Conduct a full top-down review: who are we, what are we doing, what does the future look like in our industry, what does it look like for us, how does AI get embedded and integrated into that, where does that accelerate, where does it create risk, where does it create opportunity, and how do we as a company and in terms of our operations need to adapt? That's point 1.

Point 2 would be establishing the strong governance processes that you described earlier. It comes back to your values: how do we think about AI being used at our company in a way that is in sync with the company we are and the kind of company we want to be, the way we show up in the world? And then, operationally, how do I drive that down to everybody at the firm? In a larger scale, it would be doing what we did at JUST Capital, forming a group. You'll have some super users there. You'll have people like me and other folks who still have the training wheels on. People have to be, by the way, not embarrassed to say, "I'm not really using AI as much as I need to. Help. Help me. What do I need to do?" People got to be willing to put their hand up and say, "Help me be better in my job. How do I do that?" That speaks to your culture.

As a company, you've got to have a culture that will allow for that. So that's where I would go, top down, full 360 governance procedures. So you've got the norms, you've got the framing, and then it's all about execution.

Carol Cone:
It is. When are we going to see your JUST Capital chatbot? When do you think?

Martin Whittaker:
Q1, next year.

Carol Cone:
That's great. Yeah, it's a very interesting and new world for sure. I always love our conversations. I learned so much. What haven't I asked? What haven't we discussed, at least on this one?

Martin Whittaker:
I think what's different about this, Carol, from all the other things you mentioned and anything I've seen, is it strikes at the very beating heart of what it means to be a human company. What does it mean? Are you and I going to be obsolete in a decade, and we'll have a Carol Cone AI speak to a Martin Whittaker AI? We'll be off doing something else. It's existential, and that is what is different here. I think you ask yourself a deeper set of questions. I like that. I like that it requires reflection. I think if you get comfortable with that reflection, you start to get a little more optimistic. It's easy to be beaten up by a sense that this is some overwhelming thing and we're on a worrying path. The more familiar one can become, and the more fundamental the questions you're willing to ask of yourself... Certainly, I've asked a lot of myself around this: "What kind of a leader am I? Am I really going to step up? How about my team?" Those are great key challenges.

Carol Cone:
Absolutely, you can't coast now. You've got to really lean in. I wanted to ask this question, which is that, can AI replace the humanity in business? Can AI have the empathy and the humanity that we, as human beings, bring into our lives and to our business relationships? Can it replace it?

Martin Whittaker:
My sense is that it depends on who you ask. I listen to a lot of content and podcasts, from people looking out over a generation to people looking forward six months. I think you could say it's possible, but one likes to think that, like any technology, we can use it for good or for ill, and we can use it to improve the human condition, make us more conscious and more aware and more human, or not. I don't know, maybe "the jury's out" is the best way to say that.

Carol Cone:
But the question is, will it have the empathy so, if a young person is really having second thoughts, that it's going to help them, which is-

Martin Whittaker:
Yeah, I know why you're asking that. We've seen that in the news. Here's a question that flows from that, which is, if a large language model results in human harm, who is liable?

Carol Cone:
And that's going to go to the courts. It's happening now.

Martin Whittaker:
Right. So you might get a legal answer to your question, but I don't know if you'll get a moral one.

Carol Cone:
True.

I want to thank you for this discussion. This is a great time for JUST Capital. It really is because I know we've talked a lot about how are you evolving and the positioning and et cetera in a world that is incredibly noisy, and this is-

Martin Whittaker:
Can I just say, Carol, just on that point?

Carol Cone:
Yes.

Martin Whittaker:
If anyone watches this or listens to this and wants to partner with us, support us, get involved, just reach out to us. We're trying to do this through partnerships. We're open for business on this subject. This is so important. So if you want to donate to us, we're a nonprofit. If you want to help us, you want to support us, get involved, use our data. If you're running a company and you want to use our Just Intelligence platform, reach out.

Carol Cone:
That's a great end note at this point in this conversation. As always, Martin, it's great, great to have you on the show.

Martin Whittaker:
Look forward to it. Thank you, Carol.

Carol Cone:
This podcast was brought to you by some amazing people and I'd love to thank them, Anne Hundertmark and Kristin Kenney at Carol Cone on Purpose, Pete Wright and Andy Nelson, our crack production team at TruStory FM, and you, our listener. Please, rate and rank us because we really want to be as high as possible as one of the top business podcasts available so that we can continue exploring together the importance and the activation of authentic purpose. Thanks so much for listening.
