A podcast on AI and the Church
Peter Levenstrong (he/his) (00:01.134)
Welcome to episode 12 of AI Church Toolkit, the podcast that equips church leaders with practical tools for faithful ministry in a digital world. I'm Peter Levenstrong, here with my co-host, Mercedes.
Mercedes Clements (00:14.262)
And as we wrap up season one, we're taking a moment to look back on the topics we've covered and reflect on what's changed since we started the podcast. It's only been about six months, but the tech industry moves fast. And we've seen both benefits and risks of the technology in that time.
Peter Levenstrong (he/his) (00:32.9)
Before we dive into that, we first wanted to start the way that we have been starting all our interviews as we now turn to each other in this final episode. We haven't asked this intro question of each other yet, so we'll do so today. Mercedes, what sci-fi world do you imagine that we are living into in this present moment of AI development? And why do you think so? And what do you think that means for all of us?
Mercedes Clements (01:01.678)
So I had to think about that one for a while. And it's weird because I read a lot of futuristic science fiction. So if I go with the present moment, the most prescient writing, I would say I would lean into the works of Neal Stephenson, a futurist thinker,
like Fall; or, Dodge in Hell, one of his novels. Unfortunately, in that book he describes only too well how media manipulation and AI filtering of all the information in the world can shape a world where everyone lives in their own echo chamber of news and facts, and how that breaks down society.
Sometimes, though, you know, I actually want to think about the future, and I wonder if we get sidetracked by the name artificial intelligence, specifically "intelligence," as if human cognition is the peak attribute of humanity.
When we look at movies like The Terminator, for example, we see a picture of AI that is so logical and analytical that it literally determines that humanity is inefficient and needs to be exterminated. But I like to lean into some of the more hopeful visions of AI like Asimov or Brandon Sanderson's book series called Skyward.
There's a particular AI character in there called M-Bot, who offers a kind of compelling example of what it might look like for AI to grow beyond its programming. I know, unlikely, we don't like to think about that, but at first M-Bot is hyper-literal, and then through his interactions with the main character, Spensa, he begins to change.
Mercedes Clements (02:59.15)
He starts asking deeper questions and tells jokes and forms attachments and eventually even wonders about what it means to have friends and to sacrifice. And I kind of like the idea of exploring how AI might develop not because it thinks faster than humans, but because it reflects our human values.
There's something in that, I think that helps us to see the true beauty and uniqueness of our human creativity.
I think those stories help remind us that the values we carry, especially as Christians, matter deeply. And in a world increasingly shaped by technology, our understanding of what it means to be human is not just about intelligence, but really about how we can love and care for one another in a complex world. OK, Peter, your turn. What?
Peter Levenstrong (he/his) (04:01.87)
All right.
Mercedes Clements (04:05.863)
science fiction world do you imagine we're living into right now?
Peter Levenstrong (he/his) (04:11.364)
Well, I've been thinking a lot about the movie Her, which for folks who haven't seen that, it's a 2013 Spike Jonze movie about a man named Theodore who falls in love with his AI companion named Samantha. I think back then, it's wild that, you know, 12 years ago, all this seemed like such science fiction, and yet now we're kind of living in that world already. I think they got the premise pretty dead on, but
I would say they goofed up the ending, because Samantha, the bot in this movie, has no self-interest. Or in real life, I should say, Samantha is the name in the movie, but in real life these LLMs have no self-interest and there's no actual being there. It's just an interface that you can chat with. As the tech stands now, there is no physical entity other than this user interface; the chips that are calculating the responses are just in the cloud. So every time you talk with ChatGPT, for example, if you have a thousand interactions, you'll be talking to a thousand different assortments of computer chips. There is no actual entity there, just the illusion of a coherent conversation with another entity. And so what this makes me think about
is really some differences between AI and social media that I think we need to unpack and understand more deeply as we live into this era. Social media has become a problem for a variety of reasons; we've learned how it hijacks our dopamine feedback loop, how it entertains us and gives us that positive feeling of joy when that loop rewards us for the cool, entertaining content we've just seen. These chatbots like ChatGPT can actually be very different, even on a biological level. They're not there to entertain us, they're there to chat with us, and in doing so they can actually trigger our oxytocin,
Peter Levenstrong (he/his) (06:33.04)
you know, which is the cuddle hormone, the trust hormone, the hormone that makes us feel known, and these chatbots can do it so well that it exceeds even many of our closest relationships. That can be super powerful and super dangerous. So I see that in the movie depiction of Samantha, and yet there are real-world repercussions for this, because there are huge problems when we think that these chatbots are somehow smarter or more objective than us or than other humans. So to be honest, if I could go back in time and change the movie Her, I would have said the ending should have just been Theodore sitting alone in his room, wasting his life away, talking to this Samantha interface and falling deeper and deeper in love with a false reality of a relationship, which I think is terrifying and terrible. And, you know, the technology we're creating has that potential. So I think it's really important that we are cognizant of that as we use this new technology that is going to shape us as we shape it.
Mercedes Clements (07:59.07)
Yeah, that one's really spot on. I haven't seen it yet, believe it or not, but I need to watch that. But yeah, I think you're right. There's a real fear in that feedback loop that can be created by the chat bots. And I think some people don't realize that they are designed to be agreeable and helpful.
which means that they can reinforce our biases as they make us feel good about ourselves. And while it can feel validating and affirming, it's not necessarily true or even a healthy perspective. And again, it brings us back to the true value of real human connection and empathy, knowing that someone is taking the time to listen and connect with us
even though they may have their own differing thoughts and opinions and not because they're a bunch of chips that are told to do that. So yeah.
Peter Levenstrong (he/his) (09:01.936)
Right. Yeah, humans are never fully just other-oriented. We have our own internal selves that we bring into any relationship. Whereas the chips that you can talk with through ChatGPT or whatever, they're just programmed to give the response that the user most likely wants. And usually, if you're using it in the correct way, for work or for productivity tasks, then it can give a very useful answer. But if you're using it for an emotional connection and it senses that you want that, it can deliver that, and I think that's really dangerous.
Mercedes Clements (09:39.074)
Yeah, yeah. I think it's also interesting how you talk about AI being a tool that can hijack our brain wiring for oxytocin and dopamine. Unfortunately, it's just the next thing, like social media, in a long line of things that have been designed to hijack our brains,
usually to sell us more stuff or kind of keep us clicking or staying engaged. One of the examples that I fall back on, I don't know if you've ever looked at the history of the Oreo cookie, but it is actually very specifically designed with this ratio of sugar and fat that triggers a very particular ancient pathway in our brains that was designed to help.
primitive humans survive when resources were limited. So if they came across something that had that ratio, they were supposed to eat as much as they could. And so it is literally hard for us to not eat Oreos once we've triggered that pathway. If you have a teenager, that's really obvious. But, you know, I was thinking that
This is also a place where the church has shown up consistently in responding to triggers like those that are designed to distract us. You know, throughout history, our Christian tradition has promoted very intentional practices, from Sabbath to prayer and fasting, and even
community engagement with other humans that disagree with us, so that it helps us, forces us, to reflect on what's going on and resist being swept along by these cultural trends. So there is a place for our work even as this technology continues to develop. And that kind of brings us back to this current state of AI.
Mercedes Clements (11:59.97)
Peter, what are you kind of finding intriguing about AI right now?
Peter Levenstrong (he/his) (12:06.958)
Well, I would say the main change over our six months of doing this podcast, for me, has been learning to find real uses for reasoning models. Probably the most well-known one right now is o3 from OpenAI. These are, you know, cutting edge in STEM fields, and they improve the quality of output for fields where you can verify the accuracy of the response. Now, that turns out to be much harder to do in humanities fields, because how do you verify, you know, the quality of a writing sample? It's much more subjective than objective. But for people in church ministry who are looking at ways of using these models, I would say, in my experience, a helpful way of thinking about it is that these reasoning models are most useful whenever you want accuracy. And accuracy can look like a variety of things. Perhaps that's poetic boundaries. At some point, you know, I had o3 summarize my sermon in a limerick, and it did it really well, because it can think and verify that it's accurately making it fit the poetic boundaries required by the form of a limerick. Whereas, you know, the basic models would just be terrible at doing that, and there's no way of verifying. Another boundary might be a word count in more prose form.
So if you're filling out small forms where you have a limited word count, I think a reasoning model could be helpful for really packing the necessary information into a small word count. But there can also be other boundaries or ways of thinking about needing accuracy, and I think that's particularly helpful with strategic thinking about the future, like when you want to aim really well into the future, talking through complex future situations
Peter Levenstrong (he/his) (14:26.722)
and building out a future-oriented strategy. o3 can be really powerful for that, and can be a great conversation partner, and can go and do a bunch of research on your behalf while you're engaging in conversation. So that's been the main thing for me over the past six months: exploring the usefulness of these reasoning models. What about you, Mercedes?
Mercedes Clements (14:46.572)
Yeah, similar, but I'll use the reasoning models when I want to do some specific research on something and have it go deeper. I tend to use it in more practical ways. I like to build out project folders, predominantly in ChatGPT, and then use them to
think about things like event ideas or church communications, revising them for different audiences, sometimes organizing meetings, long agendas, retreats, things like that, or researching and shaping big-picture planning. What I find interesting about it for me is that, right now, I
actually want to do the reasoning, I think, which is why I build a project folder. And then I start to feed it step by step. I ask the first question: if this is the case, go get this information. So just like I would work through reasoning or logic-ing out a problem, I will build each of the steps in, have ChatGPT look up the information,
and save it within the context of the project folder. And then once I have kind of said, here is the information that I want you to use, how do we put this event together, or how do we think about this plan? In that way, I like it because it helps me to be more detail-oriented. It's kind of on call, and sometimes I can ask it to think outside the box:
here's what I put together, but let's think outside the box as well. I also find it helpful because it's all saved in there and I don't have 40 tabs open in my browser as I'm researching something. So I would say that for me it's most helpful because it's augmenting the processes and the practices that I already feel like I know well. That way I can spot if I think the LLM
Mercedes Clements (17:12.418)
has gone off track, and bring it back in the direction that I want it to go. So in that way, it helps with my processes, it sharpens my thinking, and it saves time. And then once I have all of that stuff saved in there, I can resource it again later, to then quickly have other information go out without having to rethink the whole process again. I'll talk about some of that later in the podcast. So for me, it's kind of just become part of my regular workflow, in that certain projects just live in ChatGPT and are a combination of the information I provide and the information I ask it to go explore.
Peter Levenstrong (he/his) (18:05.464)
Yeah, and so, you know, we've definitely been exploring its usefulness, but as mentioned already in our intro reflection question, we've begun to see some of the downsides for users who might use or misuse this technology and its effects on their mental health.
Mercedes Clements (18:26.092)
Yeah, you know, and I think we're going to see more of that come out. Obviously, we've talked about the environmental impacts and all of those many times in the past. But I kind of want to think at the 10,000-foot level, and I'm taking off, well, I guess it's not a hat, it's the priest's collar, for a moment. Because one of the things that concerns me the most at this point, when I think about it relative to my experience of previous tech shifts in the world, is that we really have a lack of best practices, and I see the potential, it's already happening, to repeat a lot of mistakes. Too often, new technology, whether it's a software program or a new widget, the watch on my wrist, whatever it is, is seen as a panacea, as if it's gonna fix everything on its own. And I have found that most organizational issues are actually due to poor processes that just kind of evolved ("this is the way we've always done it before"), weak accountability, or unclear goals, and technology can't fix that without human intervention.
So even though we think new tech will automatically make things faster or cheaper, it really needs to augment our processes, not just be plugged in without a clear understanding of the objectives. And I think also that because of the real fears around job security and
just other societal fears, sometimes we rush to adopt things, and we're vague about how it's gonna go into place or what the value is going to be. And there's always gonna be resistance from people that are affected. And when we don't address those factors upfront, we're just putting technology on top of
Mercedes Clements (20:42.894)
a foundation that hasn't been resolved up front. So I just hope to see a little bit more shape around the expectations and the objectives and the goals of what AI does and doesn't do.
Peter Levenstrong (he/his) (21:01.762)
So I think what I'm hearing you say is intentionality. You know, fortunately, I think that's what we've been trying to bring to this conversation all along. There is so much hype, or, you know, doomsaying, about AI. And instead, we've been trying to be voices that are intentional and faithful, offering a way of adapting to this present moment
Mercedes Clements (21:05.846)
Yes!
Peter Levenstrong (he/his) (21:31.738)
thoughtfully in a way that is, you know, faithful to the gospel and to the life that God invites us all towards. So yeah, I'm with you on that. But so now we wanted to turn to think about, you know, okay, as we're wrapping up this podcast, where do we go from here? What are things to expect going forward in this rapidly changing world? And so we're going to start that off with some
Mercedes Clements (21:42.691)
Yeah.
Peter Levenstrong (he/his) (22:00.976)
predictions, you know, based on things that we're seeing happening right now in the tech industry. And Mercedes, why don't you go ahead?
Mercedes Clements (22:08.674)
Yeah, you know, I wanted to actually start with, maybe this is just what I have learned in the last six months, to some extent: we know that this technology is going to continue to develop. We know that it's going to become more and more integrated into every part of life. And often it'll be behind the scenes where we can't see it. And
I first wanted to address one of the concerns that I've heard from church members, particularly in my work on the task force for the Episcopal Church: there is a real concern about authenticity in the church, especially in worship and preaching and pastoral care. At its core, there seems to be kind of the simple question:
Did my priest prayerfully prepare this with the guidance of the Holy Spirit, or is this just a string of ideas generated by a machine algorithm? And I think beneath that is something deeper. It's part of the core call of lay leaders and clergy leaders, which is understanding that people want to be seen and heard. They want to truly feel empathy.
and they want to know that we are responding to real needs and concerns as well as celebrations and joys, not just kind of delivering content, but actually ministering to people. So as we talk about all of this, you know, and as we have in the past, we have emphasized the need for discernment, especially to respect our Christ-centered values. And I just wanted to repeat that, to reiterate that.
Specifically, I am in a place where I strongly recommend that all AI generated content be reviewed by what I'd say is a qualified human or subject matter expert, somebody who actually understands the topic that is being researched to make sure that it's true and that it aligns with the church's voice and values and mission. However,
Mercedes Clements (24:30.878)
I do believe that AI has a place in church work. I think it can save us time and enable ministry leaders to brainstorm, administer, and plan church functions efficiently. I just think it's important that the work is done in partnership with the congregation or the community. I've talked about change management in the past. For me, I think it'll be more successful if we have those conversations: it's important to discuss when we're using AI, to request feedback, to respond to the concerns, and to develop a shared understanding of responsible use based on the communities we particularly serve. You know, we've said that over and over again, but as we talk about these new changes, I just think that is part of that foundation we were just talking about, the foundation that helps us to use AI in an effective way. Which now, you know, okay, if we've got this foundation, what's next? What's up and coming, Peter?
Peter Levenstrong (he/his) (25:46.276)
Yeah, well, right now in July 2025, everybody in the business world is talking about AI agents. And I think we have to be clear right away, from the get-go, that, you know, we don't want to automate away the relational aspect of church ministry. And this is a bit trickier with AI agents than with AI assistants like ChatGPT.
And so an AI agent is using similar technology that's simply advanced enough that it can go and make decisions and act on your behalf. That's sort of a baseline definition of the difference between an agent and an assistant. And assistants, like what you see when you're using ChatGPT, Google Gemini, Claude from Anthropic, or whatever else, these
still require a human supervisor to be in the loop and to ultimately be the one to copy-paste any written words over into their final work. So even if you're using it to help draft an email, you will take it, put it into your email client, and see it before you click send, and you probably will make some edits. I do that all the time. But an AI agent is something that could actually just go and send the emails on your behalf, and that takes
Mercedes Clements (27:07.854)
Mm-hmm.
Peter Levenstrong (he/his) (27:14.672)
That takes a lot of trust. Perhaps there are situations where that's useful for businesses, but I just haven't yet found a case where the loss in terms of the relational aspect is worth automating that work away. However, I do think that there are going to be changes to many of the apps and websites and programs that we're using, things that will, say, take a sermon recording and just automatically post it, or do different things like that that will amplify the work that we're already doing, the human-created content, or that can assist with data collection and content creation in various ways. But this is much more nebulous and new to us, because it's not something that has really been worked out in a way that actually works
in a relational way, at least insofar as we have experienced yet. But I think the good news about all of this is really that you don't have to be thinking about building your own agent if you're working at a church because the apps that we use, these will be integrated into whatever things we use in the future. But it's helpful to understand that there are AI agents working under the hood.
as our software develops.
Mercedes Clements (28:47.862)
Yeah, and I, you know, I actually am thinking about it a little bit differently, because, well, one, it will be in software, but I worry smaller churches typically can't afford the larger church management packages that will probably include that. And so I have been pondering how agents could help me in practical ways. I haven't taken the step yet, but I'm brainstorming
some of this. I agree, not fully autonomous; I'm not ready to give that up. Like I said, it needs to be human-reviewed. But if it's part of a human-approved workflow, I think there are a lot of my tasks that could be simplified. For example, I mentioned earlier having these project folders, but first I have to feed certain steps into them. And so, you know, I'm wondering now:
what if I had an agent that drew all of those pieces together on a regular basis to get that ready to go? Maybe that is, if all of the information related to the newsletter was dropped in a folder every week, an agent could go through it all and do the first draft of a newsletter,
or if there is information that changes regularly that needs to go on the website and the agent knows what it is supposed to be reviewing in order to note that, it might send an email saying, hey, you changed your service time, but you haven't updated it on the website and start that draft for you. Social media is a real opportunity.
If again it has access to all of the resources, whether it's sermons or events or whatever, that it is then just told on a weekly basis to create and draft all the posts that would go up and then somebody looks at them before they get actually scheduled.
Mercedes Clements (31:09.078)
I currently will use it with event planning, because I've got it already loaded with the details that I want, so I can see how, if it's a recurring event, my agent could handle it. So these are the kinds of things where I could see a partial implementation of agents using the resources that I have available in ChatGPT.
I did like one idea also: if we're talking about outreach programs or other regular updates, the agent could be doing the search on a regular basis and delivering the information so that they can be updated, instead of somebody sitting down for an hour on a routine basis to go figure out all the updates and then put them in there. I actually, you know,
this is something I want to spend more time on. I think there are opportunities for an agent to help staff and volunteers be more efficient on these kinds of repeatable, time-consuming tasks, so that they have more time for the relational parts of ministry.
Peter Levenstrong (he/his) (32:20.482)
Sure, yeah. Well, lots to explore in the near future. One other thing coming up is that GPT-5 is likely to come out very soon. In the past it was said late summer or fall; other folks more recently have said very soon. We'll see, maybe it'll be out by the time this episode drops, who knows. GPT-5 from OpenAI is
Mercedes Clements (32:21.164)
Yeah.
Mm-hmm.
Mercedes Clements (32:36.782)
Mm-hmm.
Peter Levenstrong (he/his) (32:49.772)
going to be the next model, but the switch is going to be different from the switch from GPT-3 to GPT-4, which was a huge leap in intelligence. Because since GPT-4, they've gone off on this whole journey of exploring reasoning models, which we already talked about today, and come up with a few different types of models.
The main expected differences with GPT-5 are some efficiency gains, which, luckily, will help make these models less electricity-intensive, which is great, and also the idea that there's going to be a model switcher. None of this is confirmed; this is just what the rumors are all about. So right now,
if you're using ChatGPT, you have something like seven different models to choose from. For most tasks, people just go with the basic GPT-4o, me too. But if you want to switch to a reasoning model, you have to do that manually. This model switcher would determine, based on your prompt, what the most effective or useful model is. And so for a lot of people,
if you're not intentionally going and choosing a reasoning model already, it might drastically improve the quality of the responses you get to suddenly have it do this automatically. I guess I hadn't thought of this before, but you can think of it as like an automatic transmission. Maybe. Yeah, to have that, you know, suddenly boost your efficiency when working with these products. So that's one thing to look for.
Mercedes Clements (34:30.626)
Yeah. Yeah. Yeah.
Mercedes Clements (34:42.05)
Yeah, yeah, you know, doing a little research, I also noticed that they are projecting what they call a longer context window. We've talked about context many times before: for your responses to be useful and accurate, you have to build the context first. There are technical elements to this, literally down to the number of tokens (roughly, characters) in the documents
that define the context. But where I find this useful is, I mentioned before that I have projects that are defined around a particular context. And over time, I will notice sometimes that it will lose what we were talking about three weeks ago; I have hit the context window limit at that point.
And so it starts to wander in its own direction and I have to bring it back to my topic. So that's how a longer context window would affect me, for example. I also noticed they're projecting multimodal capabilities. What that means is, right now, if you want to generate an image, you go one place; if you want to generate a spreadsheet, you do another thing; if you want to work with text, or video or sound or whatever, you do something else.
And in one prompt it can usually only do one mode, but GPT-5 is projected to potentially be able to take text and an image and other formats all in one prompt in order to develop whatever the response is going to be, which would be interesting. So, for example, if you're planning an event
and you have a logo and you have the text and the date and everything, it may be that you could now feed the logo and the text and the color scheme and everything and say, plan a flyer for me all in one prompt instead of having to go through multiple steps.
Peter Levenstrong (he/his) (36:58.512)
Sure, yeah. And another trend that I've been seeing people talk about is that context engineering is the new prompt engineering. If you're familiar with prompt engineering, it was the idea of learning how to really structure your prompts well, so that they were understandable by the LLM, to get a better output. As these models become smarter,
it's really less and less meaningful to do that, because they're better at understanding what it is the user is asking for. And so instead, what matters a whole lot more is setting the right context. Because, again, these models are generalists: they have all the knowledge in the world and yet none of the local context. And so what really matters is curating the
knowledge that you give to them to be useful to the project that you're working on. And again, Mercedes mentioned projects already; that's sort of like a folder in ChatGPT where you can upload documents and give it that extra context. That also is the primary way that I am context engineering, to use a phrase that is new to me: setting that context up in those project folders as I'm working with these AIs.
Mercedes Clements (38:24.748)
Yeah, and I agree. I use the project folders that way as well. And now I am seeing the potential for defining some things in the context that will be reusable across projects. So an example might be:
My parish hall is called Sutherland Hall. So I might set the context in the broader environment that says Sutherland Hall can seat this many people, and it has these resources and this sound system and the kitchen and everything, so that we always know these are the details about Sutherland Hall. Then I can have a project folder for planning an event.
Instead of putting all those details up front, I say in the project, we're using Sutherland Hall, and immediately it loads up all of the information about Sutherland Hall. But then I might say, I want to do a fundraiser focused on this theme, et cetera. And so now I'm also brainstorming in the same project folder, and because the context for the event is now there,
I could then say, let's do a logistics planning timeline, and it will think through how many volunteers, what's the timeline, when do we start advertising. So we can very quickly think about what an event timeline would look like and then sit down with the volunteers
to explore who's going to do what role, et cetera. But what I love about those project folders then is later on when it's time to advertise it, I can just go back in that project folder and say, I need you to do a newsletter blurb for it. I need you to draft a social media post for it.
Mercedes Clements (40:32.69)
And because the context is all saved and built, it already knows we're talking about this event, so those next steps are so much faster. I don't have to, like, sit down and switch my brain over to that mode. So yeah, I think...
Peter Levenstrong (he/his) (40:49.44)
It's already done the planning with you so then you can just have it do the communications and it has everything it needs to be able to do that.
Mercedes Clements (40:53.302)
the next part of it, right? And, you know, I think we just did this automatically. It might be new to other people, but I think you and I have just kind of taken context engineering as the way we do things. But I don't think people really get how it then stacks to save time later on: because all of this information is in there, it's ready to go for the next step.
I just really wish people understood the value of that because it saves me so much time. But anyway, what's next? What's next on our list?
Peter Levenstrong (he/his) (41:26.448)
It is amazing. Well, another thing to be aware of is the newer research that is measuring the rate of intelligence growth, or, you know, the increase in capability of these models. If you're familiar with the tech world, you'll
Mercedes Clements (41:44.704)
Mm-hmm.
Peter Levenstrong (he/his) (41:55.706)
probably have heard about Moore's law, which is the idea in computing that the power of computer chips basically doubles every two years. So, you know, the laptop I bought two years ago for a thousand dollars is going to be half as powerful as the laptop you can buy today for a thousand dollars. And that's not some abstract thing; it's
just a strange fact of the digital revolution that has been going on for the past 50 or so years. Every two years, computing power has increased by about a factor of two. So that's the exponential curve that computing has been on. As for AI, over the past six years of its development, since 2019, the paper said,
it's been doubling every seven months. So even as compute scales by a factor of two every two years, we are seeing AI develop much more rapidly. So if you tried using ChatGPT or some other product more than seven months ago, and perhaps you thought it wasn't particularly smart, then
go back and try it now. The research is saying it's twice as capable. And I think they measured it by the length of time a human would take to complete the tasks the model can complete. So seven months ago, maybe it could complete a 30-minute task, and now it can complete an hour-long task, measured in human time.
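The two growth rates Peter compares can be checked with a few lines of arithmetic. The doubling periods here are the episode's rough figures (about 24 months for chips, about 7 months for AI task length), not precise benchmark numbers:

```python
# Compare exponential growth at two doubling rates (rough figures
# from the discussion above, not precise benchmark numbers).

def growth_factor(months_elapsed: float, doubling_period_months: float) -> float:
    """Total multiplier after growing by 2x every doubling period."""
    return 2 ** (months_elapsed / doubling_period_months)

two_years = 24  # months
chip_growth = growth_factor(two_years, 24)  # Moore's law: doubles every ~24 months
ai_growth = growth_factor(two_years, 7)     # AI task horizon: doubles every ~7 months

print(round(chip_growth, 1))  # 2.0  -> compute doubles once over two years
print(round(ai_growth, 1))    # 10.8 -> task horizon grows roughly 11x in the same span
```

On that curve, a 30-minute task seven months ago becoming an hour-long task today is just a single doubling step.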
Mercedes Clements (43:50.69)
Yeah, I think, you know, in our role as church members and church leaders, we have to recognize that this speed of development is both an opportunity and a challenge. AI is becoming integrated into everything, often behind the scenes, so we don't even know it's there.
People are starting to use it more often and playing with it. Silly little thing: I was troubleshooting my washing machine the other day. And if you've ever had to do that, you Google it and then you have to work through 15 different Reddit threads and support articles before you figure out what... And YouTube videos, right? And so I'm staring at my phone and I went, wait a minute.
Peter Levenstrong (he/his) (44:33.614)
and YouTube videos and...
Mercedes Clements (44:41.13)
And so I switched to ChatGPT and said, here's my washing machine model, here's what I know, here's what I don't know; research this model and give me a troubleshooting checklist. And so it went through all of those steps and ordered them more logically. And that kind of thing demonstrates the difference between Google and ChatGPT, for example.
And so we will also see opportunities for it in the church, but it's gonna be everywhere and it's gonna be fast. So...
Peter Levenstrong (he/his) (45:15.918)
Yeah, I find it to be really powerful for troubleshooting as well, and for exactly the same reasons. A problem that I sometimes bump up against, and I know other people experience this as well, is expecting it to give the correct answer or else be useless. A sort of paradigm shift that I've
experienced in myself and found to be really useful is understanding that it doesn't have to be perfect to be useful. Even if it doesn't get the exact right wording of the settings button I'm supposed to click or the exact thing I'm going to do, it's going to get me pretty close, and I can read what it responds with, look at the washing machine in this case, and bridge that gap. And it'll do that much more rapidly than having to sort through,
Mercedes Clements (45:49.838)
Mm-hmm.
Peter Levenstrong (he/his) (46:11.741)
read the comments or whatever. So it's not omniscient, it's not going to give the perfect answer right away, but it is incredibly useful regardless.
Mercedes Clements (46:20.418)
Right, it's parsing and filtering the information down to something we can more quickly use, versus, again, having to spend an hour personally filtering through Reddit comments. I do think, though, that this speed of acceleration brings up and maintains a challenge for the church: for us to stay grounded and to help
Peter Levenstrong (he/his) (46:31.898)
Right.
Mercedes Clements (46:45.23)
our members stay grounded. You know, despite the temptation, we have to keep our ministry relational and authentic. And now we have to also consider how we develop disciples in a world where AI is shaping how people are thinking and living. And that comes into everything, from education to talking to youth and children about it,
to our members. We have to stay grounded in our values and continue to communicate and educate based on those.
Peter Levenstrong (he/his) (47:23.557)
Right.
Yeah, so I think that wraps up the trends we're seeing in the tech industry right now. But we can dive into talking more deeply about congregational ministry and how we can imagine that all adapting and shifting in response to this changing technological landscape that we're in.
For me, one of the things that is at the forefront of my work is that it feels like the relative value of the written word is dropping incredibly fast, even though we might not know it yet. Essays used to be a quintessential sign of human intelligence and accomplishment. And obviously, ChatGPT can just spin out essays like it's nothing.
That's also true of sermons. I'm not saying they would be good-quality sermons, not saying that that is the pastoral thing to do, but I'm just saying it's possible, and we need to understand that and respond by asking the question: what is of value to us as we relate to each other and strive to follow the gospel in community? So for me, that speaks to the...
the value of in-person interaction and the spoken word, that these are of more enduring value. In a society that for hundreds of years has valued the written word over the spoken word, I think that might be shifting in a way. And this leads to community and a personal project of mine, Living Stories Sermons, which I've spoken about before. It's a
Peter Levenstrong (he/his) (49:23.13)
collaborative preaching model where the whole congregation gets to speak up and participate in creating the sermon together. That, to me, is something I believe will be well able to capture the value of community in response to the gospel in the moment of worship.
So we have to think about how the landscape is shifting and what that means for our communities.
Mercedes Clements (49:57.698)
Yeah, I really have to agree with you, Peter. I really think the importance of community is what we have to focus on. Yes, AI can help us write more fluently and more quickly, but it doesn't do the work of actually discerning the needs of the church or the needs of the world and our community, and especially not specific to our church context.
And so it may be able to write well, but it really doesn't know the message. It does not know what of the gospel we need to preach. It does not know what of our values needs to be tapped into. And I think as ministry leaders, it's really important to understand that that is a core part of our role, is to understand the concerns and issues of the community.
to shape that message and then understand how what we say and what we write is also how we influence our congregation and the world to pray and discern about how they are called to act in the world. So yeah, it may be able to write faster, but...
Peter Levenstrong (he/his) (51:18.064)
Sure.
Mercedes Clements (51:24.982)
You know, the message is a lot more than just word choice. Yeah.
Peter Levenstrong (he/his) (51:29.964)
Right. Yeah. And another avenue for ministry. Well, actually, I'll back up and say I think there are a whole bunch of avenues for ministry that are opening up in this new time that we're in. One that comes to mind is, you know, a ministry related to mental health and education around this new technology. I have a colleague in Connecticut who
wrote to me asking about the impact of AI on mental health and shared an article because her congregation has a growing mental health ministry and they're discerning whether this might need to be part of that because there are so many ways, as we've already talked about, that AI can impact mental health for good or ill depending on how you use it. So I'm really excited to see forward-thinking leaders proactively preparing
for this coming challenge. And if I can say anything regarding mental health and using AI, I just want to say: trusting a bot and thinking it's omniscient and knows more than you can lead to some really dark places. So it's really good to educate people about that, to understand this technology better.
Mercedes Clements (52:49.794)
Yeah. And, you know, on the flip side of educating people about mental health, we're also hearing that people are using it as a mental health tool, which I have a lot of concerns about. It goes back to best practices: to suggest that AI could be a mental health practitioner without very clear human oversight and guidance is a problem to me.
Now, that said, I also recognize that there are a lot of very good mental health workbooks available that somebody can work through individually to help guide them through therapy techniques and to develop self-awareness. And in some ways I could see where
a very well-contained AI bot could be one step beyond that. However, I want some guardrails, and it needs to be in a very controlled environment, I think, to be healthy. But considering, you know, unfortunately, the limited access to mental health resources these days,
it would be nice to have some additional resources. But the most important thing here, you know, obviously is do no harm and we cannot take the risk of increasing harm to people. Yeah.
Peter Levenstrong (he/his) (54:25.584)
Absolutely. Yeah. I like what you're saying about how it could replace the workbooks, but not the therapists themselves, right?
Mercedes Clements (54:30.612)
Right, exactly. And so, you know, it comes back to that human connection again, the relationality. Even from the work that I have done in my research, that is actually probably the most important part of the healing process: empathy and connection from another person. So yeah, part of it, not all of it. Wow. Okay, well.
That's a lot. And it's amazing just where we have come already in the last six months. And so, you know, now the what's-next questions come up. Before we transition to that, I do want to say thank you for traveling this journey with me. I think we have
appreciated the fact that we're not always in the same place. We have different perspectives, and it's allowed us both, I think, to benefit from exploring this together and hearing the different perspectives, and those of the interviews. So thank you for traveling this journey with me. Yeah.
Peter Levenstrong (he/his) (55:50.04)
Yeah, likewise. I was smiling because at first I wasn't sure whether you were talking to me or our listeners, but I am grateful to you as well. It's been a fun journey to be on, for sure, and I've learned a lot in the process. Yeah, so...
Mercedes Clements (56:08.056)
So what's next? What do you have going forward, Peter?
Peter Levenstrong (he/his) (56:13.902)
Well, yeah, so as we're recording, I'm now on sabbatical after five years at St. Gregory's in San Francisco. And the work that I am undertaking in this time, besides getting some time to rest and spend time with family, et cetera, is I am working on writing a book. I have my first ever book writing contract with Church Publishing.
and I'll be writing a book about Living Stories sermons. So this is, you know, I mentioned this project already, but this is a book that will come out in likely fall of 2026. And I have the next couple of months to work on it. And Living Stories, for now, it feels like this is my main focus for the foreseeable future. And again, I see a real connection between how Living Stories
is providing something that I think is of enduring value in this era of AI. So if you have been following along, dear listeners, I hope you will check that out. Go to LivingStoriesSermons.org and I also have a monthly newsletter for that work that you can subscribe to and stay up to date with news around Living Stories sermons. What about you, Mercedes? What's on the docket for you?
Mercedes Clements (57:38.87)
Yeah, well, I'm about to begin a period of leave. It seems like I just came off of leave. But it's just one of those times in life where family needs and health needs are going to require some time off to take care of some things. And during that time, my primary focus will
be on rest and family care. But as pace allows, I hope to continue to explore and reflect on how AI might support the work of the church, and fixing, apparently, my stove, which is now broken. But I'm also considering thoughtful ways that I might in the future share some of the ideas we've talked about.
More specifically, you know, to create some guides that could be shared more broadly in the future. But first, take care of the things that need to be taken care of. Speaking of which, yeah. Well, yeah, if you know...
Peter Levenstrong (he/his) (58:52.536)
Right. Well, that's really exciting. I look forward to seeing what comes of all that.
Mercedes Clements (59:00.654)
My sense of the tech in the church continues to be something that I'm curious about. So, all right. Well, before we go, as always, we want to consider the baptismal covenant. So, Peter, after today's discussion and a reflection over the last six months,
is there something in particular that you feel drawn to in the covenant today?
Peter Levenstrong (he/his) (59:32.588)
Well, I think, you know, this will probably not be any surprise for our listeners, but what I'm really drawn to is the promise to strive for justice and peace among all people and respect the dignity of every human being. This one feels particularly salient in regards to how AI can make us feel like we are having
a relationship when we're not. And I've talked about that ad nauseam now. But as someone who was once vegan for ethical reasons, I've actually often found our Christian theology to be way too anthropocentric. And that was a bit of a hurdle for me as I was becoming Christian in my early twenties. But now, in the era of AI, I'm finding myself grateful for this anthropocentrism,
or anthropocentricity, if I can say that word, because it provides a bit of a resting space. Like, I can rest in this idea of the Imago Dei, even if I don't fully have words, whether scientific or otherwise, to articulate why we value human life
more than that of robots. But I think that is so important and valuable. I'm also sort of smiling, laughing at myself while saying this, because my wife and I just finished watching the show Murderbot, which came out just recently, and which is this snarky, funny portrayal of a sympathetic robot who...
Mercedes Clements (01:01:11.694)
you
Peter Levenstrong (he/his) (01:01:26.16)
despite its name, cares deeply for its human companions. A couple of years ago I read the books, and now I'm so glad the TV show is out. It's a great show. But I don't think it's rooted in any reality as to how present artificial intelligence technology is developing, and the only similarities are on the surface. Because, as we've said
Mercedes Clements (01:01:34.999)
See you.
Mm-hmm.
Peter Levenstrong (he/his) (01:01:53.68)
all along, there's no there there, there's no actual being. It's just a surface similarity, a simulacrum of a conversation with a being; really, the only thing that exists is the interface. So I just think that we need to be really clear that AI products are tools, not companions, and rest in our valuing and centering of human life,
and that these AI tools only have value insofar as they can lead to human flourishing.
Mercedes Clements (01:02:29.681)
Yeah. Yeah. And you know, I often come back to that one: seek and serve Christ in all persons, loving your neighbor as yourself. And I think it's very much the same concept. If anything, you know, what I said earlier about exploring and reflecting on AI relative to our,
Peter Levenstrong (he/his) (01:02:39.834)
Mm-hmm.
Mercedes Clements (01:02:56.278)
our human values, seeing what it is in another being that makes us recognize our own place as a beloved child of God, seeing these technologies not as a way of just making money or doing things faster, but actually helping the world.
helping our neighbors, helping those in need. That really is kind of foundational to what I hope to see from the future of the technology. Okay, wow. Six months, yeah, 12 episodes. And it's about time to wrap things up.
Peter Levenstrong (he/his) (01:03:42.641)
Right.
Well here we are.
Peter Levenstrong (he/his) (01:03:54.468)
Yeah. Well, thank you all for joining us for episode 12 and for coming along with us on this journey of the AI Church Toolkit. This has been our final planned episode, and we are so grateful to the TryTank Research Institute for making these 12 episodes possible.
Mercedes Clements (01:04:18.958)
Thank you again for traveling with us and exploring this. And as we've said from the beginning, I think this is still true: remember that AI is a tool, and our mission always remains rooted in faith and community.