The New CCO

In this 3-episode mini-series, we sit down with three CommTech experts, discussing what generative AI technologies currently exist, how they are developing, and the implications for the communication profession. In the final episode of the series, we look at what current systems and guardrails are in place for this technology, identify areas of concern, and prognosticate what may happen if these concerns aren’t addressed.

What is The New CCO?

The New CCO podcast from Page tells stories that explore the evolution of the CCO. From culture change to digital transformation to corporate purpose, we focus on the issues that matter to today's communications leaders.

Page is the world’s premier membership organization for chief communications officers, PR agency CEOs and educators who seek to enrich the profession and improve corporate reputation management.

Speaker 1 (00:00):
In our last two episodes, the first two of a three-part series, we began a conversation with Jon Iwata, Brittany Paxman and Dan Nestle about generative AI, what it is and why it matters to our profession. Today, we're wrapping up with our discussion on its implications for our profession and for the world at large. If you haven't already listened to the first two episodes, I suggest checking those out first. And if you have, great, let's get to it. I'm Eliot Mizrachi, and this is The New CCO. Rivet 360 has been working with Page to bring you The New CCO for more than six years, and that goes way beyond just editing and production. They're true thought partners, helping us develop our show's unique voice and identity, brainstorm ideas and tell, well, riveting stories. To me, that's what makes them and our show so special. They're storytellers, first and foremost, and as communicators, I know we can all appreciate the value of a story well told. So if you're thinking about launching a podcast or you have one that needs some fresh ideas, visit rivet360.com to book a free consultation.

Speaker 1 (01:08):
I want to try to put on the prognosticator's glasses here. We're still really early days in AI. I feel like it's only been a matter of months since we've been tinkering around with ChatGPT, and the speed of change makes it difficult to feel like you can predict even a year or two out, right? Because who knows what's coming. But I know each of you thinks often and deeply about where this is all going, and I want to ask you what your thoughts are on that. If you could look out three years, four years, and in particular at the future of our profession, what do you foresee?

Speaker 2 (01:52):
It's very difficult to see what's gonna happen three, four, five years from now. But my gut, I suppose, or just what I'm seeing among marketers and communicators who are using these technologies, who are adopting AI, is this sense that everyone's a Batman or Batgirl now. Everybody has this utility belt of incredible tools and capabilities available to them. But it's useless without the muscle behind it. Looking at all the tools and technologies that are ready and have zero barrier to entry, everybody now has the opportunity to evolve and develop their career or their skillset in the area that they want to. So about the future of our profession, I'm very optimistic.

Speaker 2 (02:51):
I think those who embrace at least the creative side of these tools, who start to say, you know what, I was never a research person, but look at this, I now have Code Interpreter, and that's opening up a whole lot of new territory for me. Every time that happens, with that learning skill Brittany mentioned, we can create a whole new set of roles and a whole new set of services and ways to serve our stakeholders. That said, it definitely doesn't bode well for numbers, I think, right? There are a lot of folks in the profession, a lot of people who get into communications at an early stage. They're good writers. They're inquisitive.

Speaker 2 (03:41):
Maybe they're journalists, maybe they wanna be journalists, and they understand the journalism profession is very difficult to break into now, so they switch over to agency. Those agency jobs are disappearing at that level. So there's a great hollowing out happening at those lower levels, I believe, and we're gonna have to figure out how to cope with that. At the senior and upper levels, people have already gotten to this point of understanding, or at least feeling like we understand, the environment more. We have a very big base of knowledge, so we can make decisions and interpretations based on what we're seeing, and then we can figure out new ways to guide and lead these tools. But if you've never seen or written an annual report, how do you know how to instruct an AI to help you create one?

Speaker 2 (04:25):
You just don't. So what are we gonna do? There's a whole education piece, and I think a new look we need to take at what the lifeblood of our profession is. What does the pipeline look like, and what is the tolerance for learning before we demand so much production? These are questions that need to be answered. So it's a wide open question, a wide open future, but everything is going to change and we need to be ahead of it.

Speaker 3 (04:59):
In terms of the future of our profession, I think there are gonna be increased demands and increased expectations around writing, around design and around stakeholder management. I think stakeholders will expect more and better, more transparent communication, and I think these tools can enable it. In terms of what it means for the people in the profession, my bias, and this has been my bias for a long time, is that generalists will be really important and more valuable. We are in a world of specialists; the marketing profession especially has really moved towards specialization. You see some of these job descriptions and it's like, we need a social media paid media buyer for Instagram in the mornings, right? Very specific roles. And that early specialization, I think, creates a lot of silos and problems.

Speaker 3 (06:10):
This type of technology, and just in general the proliferation and advancement of technology, means that people who have a wider aperture have the opportunity to make connections that people in a more narrow role won't see. Communications has always been more of a generalist profession than, say, marketing, and I think that potentially means communications has a really big opportunity in this moment, because we have a skillset that other functions in the enterprise don't have: drawing connections. So I wanna do a shout out to a book called Range: Why Generalists Triumph in a Specialized World by David Epstein. I think it's really excellent, and for anyone who's managing people or coaching or trying to develop people, I would highly recommend it, because it's really about why that broader aperture and the ability to be nimble is so important to success.

Speaker 4 (07:14):
If it's true that AI represents a level of enterprise transformation as the internet did, then the future requires that CCOs, as members of the C-suite, argue that point of view with their peers and with the CEO: that yes, we're gonna get some productivity and automation from AI, but are we thinking about this as an opportunity or threat to the core business model? And are we looking broadly across the front and back offices to ensure that, whether it's HR or the treasury department or sales, every part of our enterprise is taking advantage, if possible, of the new technology? Within the function, we've heard some great insights here about how it's gonna change communications, but I think the CCO specifically should play that role of bringing that point of view into the executive committee so they don't miss it, right?

Speaker 4 (08:23):
They don't miss it the way so many missed the internet, as we previously discussed. Secondly, specific to communications, it's difficult. Did anybody, when they saw Steve Jobs stand on stage holding up the iPhone, say, well, there go cab drivers? No, right? Did anybody anticipate Airbnb, Spotify, and the rest? No. But we have the benefit of hindsight: while we can't anticipate the specifics, we need to be on the lookout for secular shifts. Specific to communications, could we have anticipated that social media and the internet would disintermediate traditional media and introduce at least two new things that we take for granted in communications practice today? One is going direct with our messages and content, not having to rely on the gatekeepers to tell our story.

Speaker 4 (09:29):
You can't be in communications today without having the capability of creating your own content, understanding the right channels to get the content out, and understanding the new influencers out there, right? You'd be a dinosaur today if you said, well, let's call the Wall Street Journal. That could have been anticipated, as an example. And the other is the threat of social media: when something blows up on social that is threatening to your brand or to your leadership or what have you, you need to have very good response mechanisms. So what's the equivalent for AI? Let's start with the latter case. If CCOs and their teams are not today running scenarios of what they're going to do in response to a deepfake attack, they're gonna be caught short.

Speaker 4 (10:30):
This is easily foreseeable. It's inevitable that a deepfake attack is gonna occur to a major brand, and you cannot be caught flatfooted there. So treat it like any other crisis response. Be proactive. Run the scenarios, have a backup plan, know who to reach, who to contact. Line up your forensic experts now, 'cause the first question that's gonna be asked is: is what we're reading, seeing, or hearing real? And if you say, I don't know, you're gonna be replaced one day. So don't let that happen to you. That's just good brand protection and reputation management. And the other is, as fast as you can, have your teams and your agencies insist that they start to experiment and explore what the technology is capable of. That's been commented on already, and I couldn't agree with it more.

Speaker 2 (11:27):
Listening to John, I wanted to point out a couple of other trends that are going to affect the level of influence the communications profession potentially has, and also the roles that we have to play within organizations and even as individuals. To me, it's the changes in behavior that our stakeholders are experiencing already that we need to deal with, and the changes to marketing as a whole. A couple of major things. First is the change in search: we're moving away from search to chat. It's a tactical issue really, but it means we need to rethink the way we structure our websites, how we create or post content, or whether websites are even necessary anymore.

Speaker 2 (12:24):
And which parts of websites are gonna be elevated versus the way they are now. About sections are gonna be way more important than they have been. We in communications have a really important role to play here, because the content needs to be written in a very non-marketing style. It has to be conversational, and conversation is our superpower as communicators. So understanding how to create narratives and draft content in that manner is going to be mission critical for corporate infrastructure and for digital infrastructure. The second thing is the migration, I think, of our audiences and stakeholders away from the visible to the invisible. Increasingly, you can't reach Gen Z. Unless you're running these Insta ads, generally speaking, it's very difficult to reach them with advertising.

Speaker 2 (13:25):
And they're not necessarily adopting email so much yet, although I think that's gonna happen more. They're moving to dark social and places where you can't reach them, can't measure them. How do you get in front of them? How do you influence them? How do you get involved, or how do you create competing platforms that are inviting enough for them? AI is not the answer to that, but AI is a huge enabler, because in order to create the amount of content you're gonna need to have happy subscribers, and perhaps additional platforms, and I don't know what those look like yet, you're gonna need a lot of content to make that all work.

Speaker 2 (14:13):
And there's not enough people or funding, really, to make that viable. So AI is a very good enabler of that kind of thing. I think where it's all leading, as Mark Schaefer, a friend of Page and my friend, writes in a book I can't recommend enough, Belonging to the Brand: Why Community Is the Last Great Marketing Strategy, is community. And what can community be to us? It's an old concept, it's not new, but how can communicators and CCOs specifically own communities that are sort of branded, or that welcome brands? I think the CCO has a role to play here.

Speaker 2 (14:59):
Again, because communities are based on relationships, trust, and purpose. They're based on mission, on conversation, on authenticity. And frankly, marketers haven't historically been very good at that, whereas communicators, I think, have a lot to offer there. And mixes of both marketing and communications, of course. I think that's part of what the future of the CCO is, what the future of our roles are in our grand profession, which is going to meld and merge with marketing more and more as we go forward.

Speaker 1 (15:36):
I wanna turn to something John kind of mentioned a moment ago. I feel like we need to talk about the risks and the need for regulation to rein in or manage some of those risks. John, at the societal or civil level, I know the Data & Trust Alliance is advocating for regulation, standards, and adoption of practices that protect the integrity of people's privacy and data. When you're thinking about the risks that these forms of AI pose, things like deepfakes, like you mentioned a moment ago, what sorts of regulation do you think need to be put in place?

Speaker 4 (16:14):
It's early to suggest what those might be, but there are some principles that I think represent the views of most of the members of the Data & Trust Alliance, and some of the things I'll mention here are reactions to some contemplated regulation in the EU and, to some degree, the United States and other places. One is the notion to regulate not the core technology, but uses of the technology, based on risk. And this is not new in regulation, right? If you're gonna impact people's health, or children, or public safety, or civil liberties, that's a higher-risk series of use scenarios or use cases that could be regulated before you deploy a technology. We take that for granted, say, in food and drugs today, right?

Speaker 4 (17:10):
There's a different threshold for food than there is for pharmaceuticals, and that represents a difference. You're ingesting both, right? But the use cases have a different spectrum of risk, and I think most of the companies that we work with are in favor of that kind of regulation, of one form or another. Another is who regulates. I think most of the companies in the alliance would favor not creating a new regulatory body, but insisting that existing regulators consider the appropriate implications of AI for what they already regulate. For example, there are regulators who regulate the transportation industry, and with autonomous vehicles a reality, it makes eminent sense for them to consider AI in that regulatory body. And equal opportunity:

Speaker 4 (18:15):
The EEOC already exists. The Department of Labor already exists. So to the degree that AI could create harms when it comes to workforce practices, it makes sense to have those regulators retain responsibility but add AI to their purview. Those are just two examples: don't regulate the core technology, because it's impossible and the risk is that you limit innovation and breakthroughs, but do think about use cases through a risk lens. And the other: don't create more bureaucracy with yet another regulator; use what exists today.

Speaker 1 (18:59):
Dan, what about at the corporate level? Some of the risks I hear about are copyright protections and things like that. What are you thinking about in terms of protecting your company against risks like that?

Speaker 2 (19:11):
We're continuing to think about the same things we've always thought about; it's just accelerated. As John said earlier, it's about understanding that deepfakes are out there, and the potential to create damaging content or be damaged by it should be treated with what we know as communicators as solid crisis comms practices. That's still important. But as far as regulation goes, I think it's gonna be a game of whack-a-mole from this point forward. I don't know if regulation's gonna help or hurt. I agree with John that the tendency to react will actually squelch innovation. At the same time, there are bad actors out there.

Speaker 2 (20:10):
How do you discern the good from the bad? At the corporate level, the things that we can do, of course, are establish our own copyright guidelines. We talked about this earlier: we don't wanna use images that may be based on somebody else's images; there are too many loose ends there. We don't wanna plagiarize content, and we want to make sure that our own proprietary content is not put out there into the universe and then used by a bad actor. So there are self-regulatory issues, I think, that we need to deal with, but we're still sorting all that stuff out. I'm hesitant to predict where this is going.

Speaker 2 (20:54):
But I do think it's gonna continue to be a pitched battle, and get more and more so. It's interesting that OpenAI recently just shut down its AI detector. It's getting less and less possible to detect those fakes, to detect AI. So how can a regulator regulate that? It speaks of either incredible privacy violation issues or policing issues. I wouldn't know where to begin with how far they can go or how far they need to go. So really, I think self-policing and self-regulation on the corporate side, and I'm speaking primarily about corporations, is the way we need to go.

Speaker 1 (21:47):
So this, Dan, comes back to the point you made earlier about critical thinking; I think Brittany made this point as well. If you can't foresee and protect against every instance of this, the ability of people to distinguish true from false, or right from wrong, becomes more important.

Speaker 2 (22:06):
I think we still have to game it out. Every corporation, certainly anybody who's involved with risk management, with communications, with crisis and with brand risk: do the same thing you've always done. Scenario planning, understanding the threats that are out there, creating your risk analysis and your threat matrices, whatever method you choose, and stay on top of it. I think it may take more resources, at least in the near term, to stay on top of this than it has in the past. But the importance of good risk management practices and good crisis and reputation management practice cannot be overstated. You have to accelerate now: expand the remit, look deeper into what might be out there, and approach everything with suspicion and critical thinking, as you said. I mean, I can deepfake myself. I have. I went to ElevenLabs, I have my voice, I can do myself reading anything. It sounds better than me, which is how you know it's AI, 'cause there's no ums and ahs. But it's easy, and knowing that it's easy, I think we have to understand that it's easy to perpetrate some bad things too.

Speaker 1 (23:24):
Brittany, what are you advising clients?

Speaker 3 (23:26):
So I might draw a connection between what John said and what Dan said, taking the society perspective and bringing that into the corporation. I think one big area of risk is AI accidentally violating existing regulations. When I think about the regulatory framework, one thing I wanna see is just existing regulations enforced. And I'll give you a concrete example that I think many CCOs can relate to. A lot of companies have pricing algorithms that are quite AI-driven. Think about sports teams or hotels: essentially, as demand increases, the price goes up. Tickets get more expensive if a lot of people are buying them, or hotel prices get more expensive as inventory goes down, right? And that makes a lot of sense, right? Supply and demand.

Speaker 3 (24:22):
We'd all probably say, oh yeah, that's generally a good use case. But if you're a hotel company and your algorithm starts to push prices up as a hurricane approaches, well, now you're price gouging, right? And very likely you didn't build a mechanism for that into the AI or the algorithm, because you just told it: maximize the price while minimizing available inventory. So I think a big role of communications and of the CCO is poking around the company and trying to find those gaps, thinking about the regulations your company is most vulnerable to, and then essentially playing devil's advocate and saying, how could our technology put us at risk there? Because it is not only a regulatory risk, it is absolutely also a reputational risk.

Speaker 3 (25:24):
If you are price gouging during a natural disaster, that's gonna look really bad for your company. Even if it was an accidental oopsie, you have really major implications. So what we're counseling our clients on from a risk perspective: yes, absolutely copyright, bias, over-optimization, accuracy, confidentiality, all of those things. But where the role of the communicator is, in my view, is going around the organization to the other functions and saying, how are we using this technology? And bringing scenarios and saying, could this happen to us? Because if a human was in there setting the prices and a hurricane was approaching, they would have instinctively known: don't raise the prices, that's unethical. The computer doesn't know that. Sometimes you hear, and I think it's a bit of a cheesy expression, that comms is the conscience of the company. I think this is a moment in time where that is a role we should step into.

Speaker 1 (26:27):
Good advice. In a previous episode of this podcast, we heard from Howard Pyle, who's a speaker at our spring seminar. He's dyslexic, and he talked about how there's no such thing as an average user. He thinks generative AI and its ability to code will enable us to create tailored experiences for each individual on the fly, so you won't have just one app for everybody; each person's app will be tailored to their unique needs. I thought that was really exciting, just to think about the implications of that ability to personalize experiences in that way. So I want to ask each of you: thinking about the potential of AI, whether we're there or not, as you look into the future, what's one thing that really excites you about it?

Speaker 4 (27:22):
Listen, I think AI is the latest, perhaps most powerful capability in a long line of technologies that have enabled humans to push back against approximation, guessing, averages, gut, tradition, superstition, things that were not precise, not accurate. And while AI is this whole new thing with machine learning and neural networks, yes, that's true, and as we've discussed at length, it's unique and very different. But ultimately the promise is to give us a tool that will help us be more precise, more accurate, and therefore more confident. So our workforces truly will be more inclusive. Medical diagnoses will be more accurate. Disease progression will be known and therefore treated or stopped. Traffic jams will be predicted and forestalled. Energy grids will not collapse. Financial markets will not be so risky. Supply chains will not suffer imbalances. Prices will reflect supply and demand more accurately, and on and on. That's the promise here. The risks are obvious, as we've discussed for some time.

Speaker 2 (28:59):
I'm excited about so much with AI, it's hard to narrow it down. Being a podcaster and a talker and an internal events host and these kinds of things, I'm really very optimistic about my ability to talk about this stuff more, which is fun. But I'm really more excited that there's a creative renaissance happening, or about to happen, that we may be overlooking. Of course, we talk about the positives and the negatives, the pros and the cons of AI, and there are copyright issues and so on. We get that.

Speaker 2 (29:41):
But even with that being the case, you talked about somebody with dyslexia now being able to harness tools they would never have been able to use easily before. This is writ large: scale that out a bit more and think about all the people who have amazing ideas and creative visions, not the bad kind of visions, the creative kind, and have an itch to scratch, who now can try. I think there's gonna be more and more of this happening: more people creating what you might wanna call digital art, more people writing, more people building communities based on themes that they're now able to evolve into larger palettes that create larger experiences.

Speaker 2 (30:39):
I do think that people who would normally not hold themselves up as creatives can now say, wait a second, I can open up a whole new area of my mind. I have a very good friend who is a podcaster and who really had a lot of difficulty building a digital presence for his own show, and he's excellent. ChatGPT came around and he said, wait a second, I can just feed it my ideas and it will build the content, and I can even optimize it by asking it to. He's dyslexic, and he built a website in two days, right? He has a fantastic community building up now around his show, Rock Salon. Just one example that would not have happened before. So I think we're on the cusp of a creative renaissance, and I feel like I'm part of it. I love creating images; I haven't ever done that before in my whole life. I think that's what I'm most excited about.

Speaker 3 (31:46):
I'll give a society answer and a work answer. I think our lives are gonna get better as technology gets better. For example, I don't have to get up to turn the lights off anymore; I can just tell my home assistant to do that. That's so great. I think there are a lot of technology improvements that can come with AI. I lost my mother to ovarian cancer, so I think the medical improvements are very, very exciting. And preventative medicine, I think, can be much more at the forefront. Most medicine today kind of waits until something goes wrong and treats that. So I think there are a lot of really cool opportunities, just because AI gives us scale.

Speaker 3 (32:36):
We talked about doctors earlier: there are never enough doctors for everyone. This can give us some scale in all kinds of ways. So I think that's really, really exciting. I also kind of hope it makes society better. I think some things came out of Covid that are potentially better for society, obviously alongside a lot of very negative things. There are many things we're debating today, like working in an office versus working at home, that kind of stuff. I think AI has a lot of potential there; maybe we can all just work a little bit less. We were talking a lot about saving money. What if we didn't save money, we paid the same amount of money, but we just had 30-hour work weeks instead of 40-hour work weeks? I think that sort of stuff is possible with AI now. We'll see if it happens in society, right?

Speaker 3 (33:22):
But I think those sorts of things are possible. From a work perspective, from a communications perspective, I'm really excited, and this is kind of generic, about the value comms is gonna get to deliver to the enterprise. I think we're gonna be really, really valuable. I think our counsel is going to be critical, and if we step into the role, we have an opportunity to help our enterprises and organizations use AI to be transformative. To echo some of John's points, I think it's such a moment of opportunity for communications that we haven't seen since the advent of the internet.

Speaker 1 (34:08):
That's it for this episode of The New CCO. We've been trying something new with this miniseries, bringing you a deep dive with a range of opinions on the issues impacting our profession. If you enjoyed this series on AI and would like to hear more episodes like this, or you have a suggestion for a topic we can cover in the future, please let us know. Be sure to check out our latest episodes and subscribe wherever you get your podcasts. While you're there, leave us a rating and a review. We want to hear what you think so that we can keep making this podcast more interesting and valuable to you. To find out more about what's happening at Page, please visit us at page.org. Special thanks to Rivet 360, our podcast partner, without whose support we simply would not be able to bring this podcast to you. Thanks so much for listening. We'll see you next time on The New CCO.