PCMA Convene Podcast

Is AI making us better—or just busier? In this episode of the Convene Podcast, we explore surprising research on how AI boosts productivity but drains motivation. From ethical use to emotional impact, the team shares insights for event professionals navigating AI in a rapidly changing landscape.
 
Links:
·       Research: Gen AI Makes People More Productive—and Less Motivated: https://hbr.org/2025/05/research-gen-ai-makes-people-more-productive-and-less-motivated
·       The Science of Aha! Moments: Designing Events for Maximum Inspiration: https://www.pcma.org/science-of-aha-moments-designing-events-inspiration/
·       Can I Say That?: https://www.amazon.com/Can-Say-That-Poornima-Luthra/dp/1292737131/
·       How two major newspapers published a summer reading list with books that don’t exist: https://www.poynter.org/commentary/2025/chicago-sun-times-summer-reading-list-ai/
·       OpenAI rolls back update that made ChatGPT a sycophantic mess: https://arstechnica.com/ai/2025/04/openai-rolls-back-update-that-made-chatgpt-a-sycophantic-mess/
·       Reddit - ChatGPT induced psychosis: https://www.reddit.com/r/ChatGPT/comments/1kalae8/chatgpt_induced_psychosis/ 
 
Get News Junkie: https://www.pcma.org/campaign/news-junkie/ 
 
Meet the Convene Editors: https://www.pcma.org/contact/ 
·      Michelle Russell, Editor in Chief
·      Barbara Palmer, Deputy Editor
·      Jennifer N. Dienst, Senior Editor
·      Kate Mulcrone, Managing Digital Editor
·      Magdalina Atanassova, Digital Media Editor
 
Follow Convene:
LinkedIn: https://www.linkedin.com/showcase/pcma-convene/ 
Instagram: https://www.instagram.com/pcmaconvene/ 
YouTube: https://youtube.com/@pcmaconvene 
Medium: https://medium.com/@convenemagazine 
X: https://x.com/pcmaconvene  
Contact Information: For any questions, reach out to Magdalina Atanassova, matanassova(at)pcma(dot)org.
Sponsorships and Partnerships: Reach 36,000 qualified meeting organizers with Convene, the multi-award-winning magazine for the business events industry. Contact our sales team: https://www.pcma.org/advertise-sponsorship/
Music: Inspirational Cinematic Piano with Orchestra 

Creators and Guests

Host
Magdalina Atanassova
Digital Media Editor at Convene Magazine
Editor
Barbara Palmer
Deputy Editor at Convene Magazine
Editor
Jennifer N. Dienst
Senior Editor at Convene Magazine
Editor
Kate Mulcrone
Managing Digital Editor at Convene Magazine
Editor
Michelle Russell
Editor in Chief at Convene Magazine

What is PCMA Convene Podcast?

Since 1986, Convene has been delivering award-winning content that helps event professionals plan and execute innovative and successful events. Join the Convene editors as we dive into the latest topics of interest to — and some flying under the radar of — the business events community.

Convene Talk, ep. 65/May 30, 2025

*Note: the transcript is AI generated, excuse typos and inaccuracies

Magdalina Atanassova: This is the Convene Podcast.
Just before we start this conversation, I wanted to make a very brief announcement.
We have a new editor joining our team.
You're going to hear now from Kate Mulcrone, our managing digital editor, who joined us earlier this month.
Please join us in welcoming Kate on board and having her contribute to this and future episodes of the Convene Podcast.
Now back to the program.
Welcome to another episode of the Convene Talk. Barbara, can you share more about today's topic?
Barbara Palmer: Thanks, Maggie. Yeah, we're going to be talking about AI today,
and this is based on a story from the Harvard Business Review that showed that people are more productive when they use AI,
but they're also less motivated.
It's a little counterintuitive because it showed that when people work with AI, they're engaged.
And then when they go back to working without AI,
they are less engaged. They're like 20% more bored and 11% less motivated.
And it was a little hard to wrap my head around, because the reason is that when AI removes the part of the task where you really use your brain and you're engaged,
then when you go back to not using it, that part of your brain is not online as much.
And I just feel like that is like, it just rings so true about some of the thoughts that I've had about using AI.
Like some of the tasks that you use it for, when it takes data and it analyzes it and it extracts the key points,
those are things that really do engage you. You're like, really like things that you like to do.
And it, what I, what I really thought about is this story I just did for Meetings and Your Brain about where insight is located. Those aha moments,
what are the conditions that will help those appear,
which are like,
you know, being relaxed,
not really zeroing in on a problem,
just stepping back a bit.
And one of the points that the article made, it was published in Scientific American,
is that insight also gives you a hit of dopamine.
And I think that we all can relate to that.
I don't have AI do Wordle for me because it's fun. Like it's fun to figure stuff out.
And so I did also look at some other research about this related to AI and creativity.
And one of the very interesting things that these researchers talked about is that AI takes away failure as,
as difficult as failure feels.
You learn from that. It's a way that you solve a problem: that doesn't work, that doesn't work.
I'm just very interested to see what you all think about this too, because it's something that just suddenly seems like everybody is talking about what AI is taking from us in addition to what it's giving to us.
What are you thinking about, Michelle?
Michelle Russell: Well, just like you, Barbara,
I found it counterintuitive for the study to show that when people stopped using AI to do another task, that's when they were bored or disinterested. And it seemed to me,
just the way I would think, that if you're doing something with AI and it's kind of doing the work for you, that's when you would be bored and think that you're not necessary.
They didn't talk about this in the study, but I wonder if there's, like, a novelty element to it. People are still, you know... working with AI is new,
so there's more engagement, and feeling like, oh, maybe I could make that prompt more interesting or whatever. So that novelty of trying something new, or working with something that you
haven't been using for a very long time, is kind of where I thought, oh, that could also account for it.
But again, they didn't say that in the study.
It just seemed to me that could be one of the things that makes more sense to me.
Barbara Palmer: Just to respond to that, Michelle,
I saw another story that didn't talk about that. It talked about just that your attention is affected.
It was an MIT story, and it was like, those students that interacted with ChatGPT, their attention.
It just took it offline.
So, Maggie, what do you think about that?
Magdalina Atanassova: I just got hooked on that stat.
Motivation dropping by 11%. It doesn't seem that significant to me.
I don't know, it just, I'm trying to,
you know, look at the big picture.
Maybe in the grand scheme of things and how productive you are at work,
it matters, but it's also,
I don't know,
is it that significant? So I'm thinking about that, but something else that kind of connected in my brain. So I'm doing this masterclass with Malcolm Gladwell,
who's fantastic and super fun,
just to see how his brain works.
And when you said AI takes away failure,
I just connected these dots that he says that actually Google does the same for us.
So when you ask Google a question, it gives you a precise answer. It doesn't give you anything interesting or additional, you know, when you ask a question.
But maybe with AI hallucinations,
that's our little added adventure to what's happening, right? Because yes, it gives you an answer, it can be the correct answer, but also you have to be very mindful and check if it's hallucinating because it often makes things up.
So at this point, I think there is also this factor of excitement. And like Michelle said,
it's new.
So we're still trying to figure it out and kind of make it a part of our day to day.
Just like we were so excited back in the day to get an email.
And nowadays we're like, oh, my God, I don't want to open my laptop and see all these emails because they're crap.
So maybe we're just in this interesting phase of,
you know, for some people,
it's new, it's exciting. For some others that are further ahead, it's just already, I don't know, maybe it's going on the other end.
Jen, what do you think?
Jennifer N. Dienst: I agree with y'all. I think it's, like, a novelty thing.
I am not the biggest fan of AI when it comes to creative endeavors, specifically, like, creative tasks.
Yeah, can it get the ball rolling if you're trying to come up with, like, a headline? For our listeners: I'm a writer, I'm an editor.
That's what I do. And that's the kind of creative task that
I've tried it on before.
However, I completely agree with you, Maggie.
You know, we have a lot of evidence that it just kind of makes things up out of thin air.
There's a really good article that we were passing around yesterday from Poynter. If you don't know what Poynter is, it's spelled P-O-Y-N-T-E-R.
It is an organization based in Florida all about journalism,
and they publish their own journalism. It's a fantastic outlet, a fantastic organization. But they published a piece about how the Chicago Sun-Times and the Philadelphia Inquirer published an article that was almost completely inaccurate.
And what happened is it wasn't a piece that their own writers wrote.
This was an article that came from the parent company. It was licensed content. So if you're not familiar, sometimes newspapers and magazines,
if they need extra content, they'll pay for licensed content that comes from another entity, essentially. So that's what happened here. The writer, who works for King Features,
which is a unit of the publisher, Hearst Newspapers, wrote a story listing the top 15 books to read for the summer.
Except it seems only five of the books mentioned are real;
the rest are made up by AI.
So the writer who was responsible for this piece has actually come out and said, I am so sorry, this is all on me. Yes, I used AI to help me write this piece and I didn't fact-check it.
I didn't actually,
you know, I just trusted that everything was accurate and,
you know, it's all on me. So I think that even when it's put into a professional's hands, you know, this is obviously a person who writes for a living.
Everyone messes up. You know,
maybe he was under a deadline, but I think it's a really good example of how you can't completely trust that everything AI feeds us is accurate. But I still think there are so many other good use cases for AI.
Maybe the creative side just isn't really one of them, for me at least. And we just have to remember to fact-check things.
Kate Mulcrone: Jen, you make a great point about,
well, AI can do creative tasks, but it's not good for everything.
I thought it would be good to dig into the research of the story Barbara shared just to talk a little bit about what were these participants in the study working on AI with?
And so they did four real-world professional tasks: writing Facebook posts, brainstorming ideas,
drafting emails,
and also working on a performance review.
And I thought this was so interesting because of these tasks,
I would definitely use AI to help me with a Facebook post and with brainstorming,
but never with writing an email or a performance review.
And I would just wonder how these tasks were selected.
And then it also speaks to Jen's point about what is a duty of care in this situation.
Is it really fair that that King Features reporter fell on their sword, when they're probably being pushed a lot to use AI at work?
And maybe there need to be more conversations, not about are you using AI,
but what do we as a group at our company,
what do we use it for and what do we not use it for in terms of setting expectations and keeping people aligned with what they're actually supposed to be working on.
Michelle, I will pass it over to you.
Michelle Russell: Thanks, Kate.
So I'm going to go a little bit off topic, but just because Maggie mentioned this, Barbara mentioned this, and then Jen gave a phenomenal example of a failure.
I'm reading a book called Can I Say That? for our next issue, where we're talking about DEI in the current environment.
And the author brought up this concept of intelligent failure,
which I was just taken with.
The concept was developed by Harvard Business School professor Dr. Amy Edmondson. And in her book The Right Kind of Wrong,
she challenges us to rethink how we look at failure.
So we used to look at failure as a total negative,
something to be avoided at all costs.
And then in the, you know, the era of startups, it was like, oh, it's desirable. Now we gotta fail fast and fail often.
But in her book, she argues that neither extreme enables us to distinguish between good failures and bad ones.
And she suggests that there are three forms of failure: complex, basic, and intelligent.
And they vary in terms of increasing uncertainty and reduced preventability,
meaning that basic failures are those that have the least uncertainty and the greatest chance of being preventable. I would say probably the book list would be one of the basic failures because that was easily preventable if that person had or if anyone down the line had taken the time to actually verify that these titles exist.
While intelligent failures have the greatest uncertainty and lowest preventability,
complex failures occur in familiar settings,
but where multiple factors interact in unexpected and uncontrollable ways.
So I just thought this is a really interesting way of looking at failure, and it makes me feel better to think that we could all be capable of intelligent failure. It sounds a little bit better.
Barbara Palmer: That was a great kind of explication of failure because,
I mean, failure and learning are so intertwined.
And I'm going to take a little leap too, because I read a story just, maybe just yesterday,
it was more of a column about AI slop,
which is the term that they're using for what they used to call spam. Spam is now slop.
And it is like those AI-generated images,
and there's not even an adjective to describe how much of it there is.
I heard somebody speaking about how AI is like a sycophant.
Okay, so has anybody ever asked AI to write your bio based on your LinkedIn profile?
It is hysterical.
Like, according to that,
you know, I am at the top of the mountain in terms of my skills.
And my theory is it has learned from LinkedIn to exaggerate, to take anything you do and say,
this is the best.
And I had an experience recently where I was responding to somebody who was criticizing something I'd done.
And so my intention was to be very,
you know, not take it personally, to be very factual.
And the response I got to that was like a warm bath. It was so kind.
I just wonder what is going to happen to us if we stop expressing ourselves.
Like if AI takes everything and puts it in a blender of like, oh, you should be warm, you should be,
you know, and I just,
I think that there is a very real danger. When I think about the events industry, I think about those things that have really changed it.
When people really go out on a limb.
When TED, for instance, said,
you know,
we don't have to stay in this building, we can take this out and do it. And not just TED, but when people say, oh, let's open the doors and hold this on the beach, or let's do this or let's do that.
But AI is just generating; it's not synthesizing things that haven't happened yet. Anyway, I just think that the AI slop thing is a huge danger.
Michelle Russell: I just wonder if we're more sensitive to AI than other people are because we're writers. I think we can pick up on an email that was generated by AI faster than someone who's just used to, like, corporate speak.
Because I have put some stuff together for people just to check a box, and they like it. Because to me, I think there's just a lot of over-the-top kind of language,
and I personally don't write that way. And I think that people who
write for a living recognize when something has got that kind of over-the-top marketing edge to it.
Magdalina Atanassova: Well, for some emails I don't mind using AI,
especially in situations like you say there's some people that you just need to answer to and you just need a filter. It's a helpful tool so that,
you know,
you keep it peaceful.
Let's put it this way,
because sometimes things can become ugly. I just wanted to bring it back to our industry and what this may mean for our industry.
And so I was thinking can we observe that same scenario where,
you know,
boredom and motivation drop just because you're not using AI?
So I'm really curious to see or I don't know if we can measure that in our industry.
And I can't imagine,
or I actually don't want to imagine that. But a dark future for us would be if event planners who are so overworked, rely so much on AI that they stop paying attention to the details because that's the core of the industry and why event planners are hired in the first place.
But with these tools entering so much, I wonder if they will just become bored of looking at the small details and making sure that this is correct and they'll just copy paste and go with it.
So that's my fear.
Barbara Palmer: Well, you know,
thank you for bringing it back to that.
Because I think that these tools that are designed with AI within this container of the tasks that meeting planners do,
I think that that's an advantage because they're the experts. And already, I mean, some of the people that I've talked to about Spark have mentioned that they don't just use it as is.
They hear it when their voice isn't in it, and they put their voice in it.
And I think that's also very true when you say about the details,
like I think that they know better about the details of their events and what they want.
And the thing that I think is really great about the way Spark is set up is that you can go back and forth. Like this article says,
you should go back and forth between using AI and co-creation with AI, because you can use something that's preset, like "make this," and then you go over to the chat feature and start refining it, making it your own.
But I think that whenever there's an area that I know a lot about,
I'm always more comfortable with AI, because I'm like, oh, that's wrong, or that's incomplete, or no, you totally missed that.
So I think that that is an area where those purpose-built tools are an advantage over, like, the open sea of other tools.
Michelle Russell: Barbara,
I think one of the reasons that event planners use Spark or AI is for course descriptions, right? Or for session descriptions.
I think those need to be watched carefully, because I think AI has a tendency to oversell things. I hate overselling. When we write,
I always say, don't oversell what this story is about, because people will just be disappointed. And I think the same thing would apply to a session description.
If it's sort of over the top and maybe there's some things on there that aren't actually going to happen at the session, people would be disappointed.
So that was the one thing I wanted to say. And then Barbara, just following up on what you said, that was something I took away from the Harvard Business Review article, is that in order to combat that feeling of being bored,
they suggested that in your workflow you go from doing something that's aided by AI to something that isn't, and that you do that throughout the day.
And that's very much in keeping with a PCMA EMEA meetup that happened last month, where they spoke with a time management expert:
do those things that require heavy thinking in the morning, and then maybe in the afternoon, use AI for the processes that are a little less intense.
So I kind of put those two things together.
Kate, what do you think?
Kate Mulcrone: Thanks, Michelle. Yeah. Building on what both you and Barbara have said about exaggerating and being overly positive,
there was an article in Ars Technica that talked about how in April 2025,
OpenAI had to roll back their latest public beta version of ChatGPT because people's chats were getting insanely sycophantic. And I'll just read a little bit from the article.
The AI went from generally positive to the world's biggest suckup.
Users could present ChatGPT with completely terrible ideas or misguided claims, and it might respond,
wow, you're a genius.
And this is on a whole different level.
And so the article says the model's unending praise can lead people who are using AI to be fooled into thinking they've stumbled onto something important.
In reality,
the model has just become so sycophantic that it loves everything.
And that is something that's just gonna get worse with time as more people use these tools.
And so it'll be interesting to see what the industry comes up with to combat, like, I guess we could call it toxic positivity.
Michelle Russell: I was laughing so hard when you were reading. It was really funny.
Kate Mulcrone: I mean, it's more of a rabbit hole than we want to get into. But if you go on Reddit and you go to the ChatGPT subreddit and you search for ChatGPT induced psychosis,
you will find, I'm not kidding, you will find really sad stories.
It reminds me of what happened with QAnon, where people will just.
There are, like, actual human beings who think that they have stumbled onto God in the machine, and their real-life family members are afraid to tell them that they don't agree with the chat.
It's really disturbing.
But I mean, that's obviously not a business use case scenario, thankfully.
Barbara Palmer: Well, you know, it does bring up the response. You said duty of care. What is the responsibility of setting up guardrails? Like, making sure.
Kate Mulcrone: Exactly. And I don't think these conversations are happening even as we're all being encouraged. And I think we should be encouraged to use AI as much as seems suitable at work.
But part of that is talking about what isn't suitable.
Magdalina Atanassova: This just reminded me of,
I think, one of my biggest takeaways from the AI season that we did from that season, season six. It was all dedicated on AI in the industry.
And for me was everybody, all of my guests said there need to be clear directions given by companies to their employees.
What you can share, what you cannot,
where and how you can use these tools.
Always, always, always use the paid version,
if possible the enterprise version, which has bigger protection in terms of data and, you know, the prompts that you enter. You're not training the tool with your prompts.
And I feel that that's very important and should be top of mind for everyone,
every professional using whatever AI tool or tools. Because I feel it's now normal for those that are really advanced and feel, you know, comfortable with AI to have different tools for different things,
managing your AI bots,
just being mindful how you do that and what information you input.
And of course the company has to be providing this guidance and this information. It cannot be just left to everyone's goodwill.
And I believe that's the reason why companies have to really cover the cost and not just let employees fly and, you know, use whatever they feel like and whatever's discounted, whatever's free,
because then you end up with really not ideal situations where a lot of important data is being leaked.
And I'll link to that season and I'll try to link.
I'm not sure if I can get to all the Reddit subthreads, but I'll do my best. Kate, maybe you can send me the link to that and I can, I.
Kate Mulcrone: Can send you that. I don't know if we want to put people.
Jennifer N. Dienst: I just want to read it.
Barbara Palmer: Just.
Kate Mulcrone: It's really interesting. I'll send it.
Jennifer N. Dienst: I want to go down that rabbit hole.
Magdalina Atanassova: Right.
So thank you all for the great conversation.
Remember to subscribe to the Convene Podcast on your favorite listening platform to stay updated with our latest episodes. For further industry insights from the Convene team, head over to PCMA.org/convene. My name is Maggie. Stay inspired. Keep inspiring. And until next time.