A podcast from the ETEN Innovation Lab exploring acceleration in Bible translation. Tune in for experimentation, updates, and conversations about new methods and technologies advancing Scripture accessibility.
Isabella Scarinzi 0:00
Hey, welcome back to the Bible Translation Innovation Podcast, a show brought to you by the ETEN Innovation Lab. My name is Isabella, and I am joined here today by my friend Klappy. Joel, who you might have heard before in previous podcasts, isn't joining us for this episode, but we do have a special guest, Ian Lindsey from unfoldingWord. So Ian, thanks for joining us. We would love to learn a little bit more about you today, so maybe you can share how you got started in Bible translation and how you serve at unfoldingWord right now?
Ian Lindsey 0:43
Yes, so how I got started in Bible translation? Well, first I started as a software engineer. I have a comp sci degree, a BS in computer science. Then, some years after that, my dad started in seminary school, and he started to slowly prod me, nudge me to take a shot at seminary school. He was talking about Hebrew and Greek, which sounded interesting. Then I took my first Hebrew class, fell in love with it, and so then I had this dual journey going on, where I'm working as a software engineer and also going to seminary school by night. Eventually I got my MDiv. And as I was getting close to the end of my MDiv, people started asking all the typical questions: what are you going to do with it? Are you going to teach? Are you going to preach? I'm like, no, no, I don't know. And I was praying. Then I stumbled on Wycliffe Bible Translators. Their website says something like 4,000 to 5,000 of the world's languages and people groups don't have Bible translations. I was pretty floored. I just did not know that, and I thought it was pretty crazy. At this point I'm already pretty fascinated with linguistics and the biblical languages, and I was like, man, if I could just find a job where I can code, innovate, and help solve the Bible translation problem. Then, to condense the story, eventually a friend of mine was working for unfoldingWord. We met, and he was hearing how I just hadn't been able to find my way into the Bible translation world, and I didn't even know it, but he was basically interviewing me. And then I started working for unfoldingWord. I think I just celebrated my four-year anniversary, and so that's my story. That's how I got into Bible translation.
Isabella Scarinzi 2:29
Yeah, that's awesome. Thanks for joining us today. With that, we wanted to take some time today to address the use of AI in software development, which is a little bit of what Ian has been doing over the last couple of months, especially in the context of Bible translation, of course. We know that there are still a lot of questions surrounding artificial intelligence and machine learning and their use in assisting translation, or in this case, assisting software development for Bible translation, and we hope that today's conversation will help further demystify a few things for our listeners. So Klappy and Ian, we know you guys have had an interesting journey over the last few months with using AI in software development. Can you share that a little bit with us today?
Klappy 3:17
Sure, it's kind of hard to know where to start.
We've been on a fun journey, I would say, rooted in skepticism. We've had this promise of AI, and hearing about vibe coding and people just cranking out apps left and right, and then we see how those vibe-coded apps spectacularly fail, whether it's their security model or the scaling problems. These are people who created these apps not knowing how to do any type of software development. They have an idea, they just start using these new vibe coding tools, they crank out this new game or this new tool, and then they launch, they make their first million dollars, and then the app crashes and they kind of disappear into obscurity. So we've kind of been following along with that, and for me personally, I wanted to check it out. For the past two years of AI promising to do coding, I just couldn't figure out how to make it impactful. I'd try it and I couldn't figure it out. It was just wasting my time, taking too much time, so frustrating. I could have done that so much better. Okay, cool, you can print hello world, you can write a fancy Fibonacci sequence, you can predict prime numbers, whatever these silly little things were that people were doing in isolation. It can crank through benchmark scores. That's great, but how does it help my daily life? And then one day some new state-of-the-art models were released, and I was like, you know what, I'll give it a shot. I was shocked. I was like, what? Oh my goodness. I was scared for a moment, because I thought, today is the day I realize I cannot fall back on my software development career for very long. This could be the end of my fallback career. I had an existential crisis. So with that doom and gloom intro, Ian, tell me a little bit about your initial grappling with this.
Ian Lindsey 5:48
Yeah, I can't even remember the exact time when ChatGPT first stumbled onto my radar. It had probably been out for a while. I can't even remember who the exact person was that said what everybody kind of hears for the first time: have you heard about ChatGPT? At some point somebody told me, I think it might have been one of my developer friends, and I started playing with it. I started asking it to do, kind of like you're talking about, little coding tasks here and there. At first I was mildly impressed. It was just kind of a cool parlor trick, or maybe a little bit more than that. But then, especially during that time period, new models are coming out, the models are getting better at such a rapid pace, and without even knowing it, I'm starting to use it more and more, to the point where I'm like, hey, this is actually pretty cool. I can throw code in there and just say, hey, find the problems with this code, and it would do a pretty good job. Or I'd say, hey, code this function for me, and it would do a pretty good job. So I started to use ChatGPT more and more, kind of as my little unpaid intern, mostly with personal projects, and even some work projects where I would use it to help me debug or do parts of a project. But it was still kind of slow, especially because I was using the ChatGPT interface. And then, I can't remember exactly, Klappy, I guess maybe five months ago, I was down in Florida, and you were there, and I saw the way you were using it, which was a lot different than the way I was using it. And not only that, the way you were talking about it, some of the stuff you were saying, like: hey man, this thing might replace us, or this thing can 5x coding. There was a disconnect between my experience and my perception of AI and how you were talking about it. And I was just super skeptical, right? I came with questions left and right about hallucinations, which is when LLMs can just kind of make stuff up. How can we trust it? How could it ever produce production code? And that's kind of where our journey started, with me seeing you just crank out code in Florida. Or better yet, your bots cranking out code in Florida. And I was torn. I was like, this is never gonna work, but I'm also watching it, before my eyes, kind of crank out code. And I already had some background, I knew Klappy, I know his background, so I'm already coming from the perspective of, I'm gonna trust whatever this dude says. So when he's telling me, Ian, you're not thinking about it the right way, it immediately opened up the door where I was like, we need to form some type of mentorship. I need more one-on-one time to try to figure out, what are these problems that Klappy seems to have already found the solutions to? More so, why is an experienced software engineer slash software architect slash all the things that Klappy has been, who I know would have all the same thoughts that I had in my head, why is he starting to give this thing the green light and a thumbs up? And why am I not? What don't I know that he knows? And that was kind of where the mentorship process began.
Klappy 9:18
Yeah, I remember my light bulb moment, just past the doom and gloom, like, oh my goodness, this is actually capable. In this pursuit, every AI company was dropping a model that was blowing away the previous benchmark, right? They're all claiming, hey, on these programming benchmarks we do 50% better than the previous state-of-the-art model. This is a never-ending game. So I was chasing that: okay, maybe this is it, this is it, this is it. Then finally, oh my goodness, it is it. And when I saw that hope, it was also a shift in my thinking, because in this iterative testing I realized they all had different personalities, and I started noticing a pattern where AI kind of acts like the software developers I've managed in the past. I'm like, this problem is not new. I can't get this model to do this, but it's really good at front-end design. Or this model is really good at infrastructure, but my goodness, it is terrible at front-end design. It cannot do anything pretty. It is so ugly, and no matter how I script it or prompt it, it won't improve. So I was like, what if I started managing AI like software developers? What if I started writing GitHub issues, or at least my prompts were closer to writing out user stories, and building it out just like I do sprint planning for a software development team? So I started looking at myself as managing software engineers, but instead of giving these software tickets to people, I gave them to Anthropic's Claude Opus to do some further planning, and then I'd hand off the better-planned or better-architected issue over to Sonnet. Seeing how they were functioning as more like individual personalities, individual people, that is where I went from "this is an existential crisis" to "oh wow, I feel like we can actually leverage these to augment and complement an existing team and accelerate what we're doing together."
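For readers who want to see the shape of that handoff in code: below is a minimal sketch of the plan-then-implement pattern Klappy describes, one model expanding a rough request into a ticket and a second model implementing it. It assumes the Anthropic Python SDK; the model names and ticket format are placeholders, not unfoldingWord's actual setup.

```python
# A minimal sketch (not unfoldingWord's actual setup) of the plan-then-implement
# handoff described above: one "planner" model expands a rough request into a
# well-specified ticket, a second "implementer" model writes code against it.
# Model names are placeholders; requires the `anthropic` package and an API key.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment


def plan_ticket(feature_request: str) -> str:
    """Expand a rough feature idea into a user story with acceptance criteria."""
    response = client.messages.create(
        model="claude-opus-4-1",  # placeholder planner model
        max_tokens=1500,
        system=(
            "You are a software architect. Turn the request into a ticket with "
            "a user story, acceptance criteria, and the files likely affected."
        ),
        messages=[{"role": "user", "content": feature_request}],
    )
    return response.content[0].text


def implement_ticket(ticket: str) -> str:
    """Hand the architected ticket to an implementer model and get code back."""
    response = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder implementer model
        max_tokens=4000,
        system=(
            "You are a careful developer. Implement exactly what the ticket asks "
            "for and nothing more. Return the changes as a unified diff."
        ),
        messages=[{"role": "user", "content": ticket}],
    )
    return response.content[0].text


if __name__ == "__main__":
    ticket = plan_ticket("Add a /health endpoint that reports database status.")
    print(implement_ticket(ticket))
```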
Isabella Scarinzi 11:55
So it seems like I've been hearing a lot about the skepticism that both Klappy and Ian, you guys had when you first heard about using AI in software development. I heard about hallucinations, I heard about the fear of "losing my job." What are some of the other common fears that you have heard from your colleagues out there about using AI like this?
Ian Lindsey 12:21
Yeah, I mean, hallucinations is a big one, right? Which is just a more specific version of "how can we trust AI?" It depends on whether we're talking about software engineers or some of my colleagues in the Bible translation world; it still all centers around trust, but software engineers tend to have more specific concerns. Even lately, I'm embarrassed to say I've known about Reddit for a long time, but I've kind of lately fallen in love with Reddit. I just started using it recently, and it's really been cool for what's happening right here, because it helps me get the pulse of what software engineers are saying about AI. It was fascinating to see the wide range of experience software engineers have had: some of them think it's almost worthless, while others talk about it almost like how Klappy and I talk about it now, and everything in between. But the big fears are hallucination, security risks. How can we trust this code in production? How do we know it's not going to just make some code up? What do we do? And then, obviously, "will this replace me?" comes from some engineers. That was one of my initial fears. Once I saw how good it was getting, I was particularly concerned about that. And this is where, to skip back to our journey, Klappy and I started to have a lot of philosophical conversations, which I enjoyed quite a bit, about why did I even become a coder? Why do I enjoy coding? For some software engineers, like a friend of mine who's a spectacular engineer, it was a really deep existential crisis that never really went away. Because for him, coding is about the beauty and elegance of the actual code itself, and so the actual art and craft of producing code is really tied to how he sees himself as a coder and what he enjoys about coding. So if that's where your identity as a coder rests, you're going to find AI is a much bigger threat than maybe another coder would, because AI is definitely taking the place of that. It can crank out code 10 times, 20 times, 30 times faster than you can, and nowadays it can crank out code that's pretty elegant. So it's like you're completely replaced. I was in his boat at first, and it was a challenge for me: is that why I like coding? What started to change, and I'm sure we'll talk about this more, is that talking to Klappy and other people that I trust, I started to really challenge that notion. Is that really why I became a coder? Or is coding just the language, the concrete means I use to actualize ideas, to problem solve? Is it really the problem solving and the ideas where I really thrive, and it just happens that I landed on this path where coding is the best means at this point for me to do that? At first it was a super big struggle, and then all of a sudden I'm like, man, what this is really doing is freeing me to operate at that higher level, which is probably where I always wanted to operate in the first place.
I just didn't have any means to do it other than coding. And so, yes, it can be a threat to the existence of the coder. It can be a threat to job security, if you're thinking about it a certain way. Hallucinations are a threat; that didn't go away. But there are ways to mitigate that, and that's one of the big things that Klappy and I have been exploring.
Klappy 16:26
Yeah, Ian, everything that I had mentally made note of that I wanted to share, you knocked out of the park. So, great job explaining those. And yeah, it's funny, because I remember when I first learned to program, I learned a scripting language, and then people made fun of me for learning a scripting language and not a real language. Without getting too technical, there are low-level languages and there are higher-level languages, and if you don't learn a low-level language, to some people you're not a real engineer. I don't think that's going to be much different 10 years from now. You're creating software, you're meeting people's needs. What tool do you use to get that done? What means do you use to get that done, right? If we're able to use AI to help us make things better, cheaper, and faster, why not? And there's a false dichotomy. Everybody will say you've got to pick two: faster, cheaper, better, pick two. Well, that makes sense if you don't change the technology. The printing press made it faster, cheaper, and better to reproduce books; you could crank out more books, faster, better, cheaper. Then computers were invented and you could typeset digitally, and it made printing books faster, cheaper, and better. So when people say you've got to pick two, and that the quality of software is just going down, it's the same argument that people have been going through with Bible translation, right? People afraid of AI being used in translation. It's the exact same fear: well, yeah, you may do it faster and cheaper, but it's going to be lower quality. You're just cranking things out. How can you trust the AI? It's the exact same arguments, exact same fears. But with this new technology, we don't have to use it the same way we thought we had to use it when it didn't exist. Ian and I took months to wrap our heads around how to use this tool. When the drill came out, you couldn't use it the same way you'd always used hammer and nails, right? New technology means there are different processes, and we're having to work hard to relearn how to do our jobs, how to meet needs, how to solve problems with a new tool set.
Isabella Scarinzi 19:09
So it seems like there was a big shift between you guys just having those fears and those challenges being really evident, to now you coming on this podcast and advocating for using AI in coding and software development for Bible translation. So how did that shift happen?
Ian Lindsey 19:31
Yeah, I mean, for me, at first I did not want to hear what Klappy had to say. For me, the shift happened because of what I started to notice as I was watching Klappy. So we started our mentorship process. We would meet every Friday. We've been doing so without fail since, I think, roughly June; we've only missed a couple of Fridays. One of the first things I noticed from the very beginning was how Klappy was using AI, and the amount of ingenuity that was going into the way he was using it, with regards to how he curated context and how he talked to the AI, and I started to see this whole new skill set emerging. This is part of the reason why the existential crisis starts to go away, too, because it's not that we're no longer needed. It's just that there's a shift. Some people say natural language is the new coding, and at first I used to nod my head, and that is mostly true, but even more so, context curation, or bot orchestration, is the new coding. Learning how to use bots properly, or use LLMs properly, how to tie everything together, is its own skill set, and there are no books out for it right now. So I was watching Klappy learn that on the fly, and I was trailing behind him learning how to do that. And I think this ties in directly to why you see such a huge range of experience on Reddit. The developer that thinks it's just a pretty red button you can push and it's going to solve all your coding needs reaches disillusionment pretty fast. The developers that think more like software architects, or are willing to shift their mindset to software architecture, or already had that background like Klappy, are like, oh my goodness, this is the best thing ever, because all of a sudden you start to see the LLMs as junior engineers, mid-level engineers, that you need to write tickets for, that you need to put all the same guardrails in place for that you do when you're working with human engineers. And so you start to discover and explore: how do I put the right guardrails in place to mitigate things like hallucinations, to keep them on track, to handle things like context drift? And when you start putting those guardrails in place, you start to see magic happen. That's what I was watching Klappy do. And it's just trial and error. Some days he'd have success, some days he'd be yelling at the bots, right? And then other days he's like, you know, I solved another problem. I'm watching him go two steps forward, three steps back, eight steps forward. And as I'm watching that process, it's starting to click in my head: oh, this is amazing, this can actually work. And when you look at Reddit, you can see the developers where that light bulb has gone on, the developers where it's only half on, and the developers that, as soon as they reached the level where they had to put in some of that effort, just immediately quit, and they're saying things like, LLMs don't work, it's garbage, it's worse than an intern, stuff like that. So what do you think, Klappy?
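To make the guardrail idea concrete, here is an illustrative sketch, not anyone's production setup, of one of the simplest guardrails for bot-written code: refuse any change until the project's own test suite passes, and feed failures back for another attempt. The ask_llm and apply_patch callables are hypothetical stand-ins for whatever agent or tooling you actually use.

```python
# Illustrative only: the simplest guardrail for bot-written code is to refuse
# any change until the project's own test suite passes, feeding failures back
# for another attempt. `ask_llm` and `apply_patch` are hypothetical callables
# standing in for whatever agent or tooling you actually use.
import subprocess

MAX_ATTEMPTS = 3


def tests_pass() -> tuple[bool, str]:
    """Run the project's test suite; return (passed, combined output)."""
    result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
    return result.returncode == 0, result.stdout + result.stderr


def guarded_change(task: str, ask_llm, apply_patch) -> bool:
    """Ask the model for a patch, apply it, and keep it only if the tests pass."""
    feedback = ""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        patch = ask_llm(f"{task}\n\nPrevious test failures:\n{feedback}")
        apply_patch(patch)
        passed, output = tests_pass()
        if passed:
            print(f"Change accepted on attempt {attempt}")
            return True
        feedback = output  # give the next attempt the failing output as context
    print("Change rejected: tests never passed")
    return False
```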
Klappy 22:52
To me, it's somewhat comical, and I'm trying to articulate what I find funny in it. Looking back, I think what was so funny to me is that I can't pinpoint the moment it got good enough, or got better, to address certain issues. There was no clear moment, because the different models were advancing so quickly that we just had to keep trying things that did not work the month before. The things we were sure we couldn't do with AI, a month later we tried that same thing on a different AI, and somehow it worked. So we saw the advancement, but the advancement wasn't always in the model. I don't want to get too technical, but when you see a large language model, let's say OpenAI drops GPT-5, then all of a sudden you can do new magical things. Yes, that's true, but there are other times when the tooling around the AI got better, when they optimized. Because keep in mind, these large language models are like a person getting smarter, but just because you have a smarter person doesn't mean you can properly leverage them on your team. You still have to figure out how to integrate the person into the team, and you've got to figure out how to integrate this large language model into your workflow. And I know our target audience isn't all software developers, but maybe we can go more technical today than we normally can. What we had to be willing to do is bang our heads against the wall on certain problems. We can't just accept "we can't do that with AI" and never try it again, because trying it again a month later, we were able to actually exceed our expectations. There are so many times my mind was blown at what I was able to do, and if I told you what they were, you'd almost laugh, because things have advanced so much now that Ian and I kind of take it for granted that you can do this kind of stuff, and we forget we couldn't do it just two months ago. The latest time I experienced this was just last week. When you use ChatGPT and you ask it to create a document for you, and then you continue to have a conversation, and it creates a document on the side and starts manipulating that document, to me that seemed like black magic. I thought only OpenAI could do that. I thought only xAI and Anthropic had the internal understanding of their models to be able to create a canvas that you can directly manipulate in a conversation. I'm sure there are other people doing it; I've seen Notion do it. I thought it took huge investments and huge development teams to pull this off. Well, a couple of weeks ago, I just asked an AI: is this possible? Can you help me plan a way to create a chat-based tool that lets us have a conversation while it manipulates a data table, renders data, creates dynamic filters, and dynamically generates charts on the fly, not for me, but for the user of the tool? And so to see an AI create another AI interaction that drove tables and generated charts dynamically for a user, so they could make any ad hoc query in natural language; you could have anybody come in and create a report and get charts and tables, and they don't have to know how to use the software. They just ask the AI the question. My mind was completely blown. I thought that was just black magic.
Now, we may look back six months from now and laugh listening to this podcast, because by then everybody will be doing that kind of stuff; it's going to be second nature. I believe all the stuff that is blowing our minds now will just be commonplace.
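A rough sketch of the pattern Klappy is describing, conversational queries driving a data table, might look like the following: the model never touches the data directly, it only returns a small JSON spec (a filter plus a chart choice) that ordinary pandas and matplotlib code then applies. The OpenAI call, the model name, and the spec format are illustrative assumptions, not the actual tool he built.

```python
# A rough sketch of the pattern described above: the model never touches the
# data; it only returns a small JSON spec (filter + chart choice) that ordinary
# pandas/matplotlib code applies. The model name and spec format are invented
# for illustration and are not the actual tool discussed in the episode.
import json

import matplotlib.pyplot as plt
import pandas as pd
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def question_to_spec(question: str, columns: list[str]) -> dict:
    """Ask the model to translate a plain-English question into a chart spec."""
    prompt = (
        f"The table has columns {columns}. Answer with JSON containing "
        f"'filter' (a pandas query string or empty), 'x', 'y', and "
        f"'chart' ('bar' or 'line'). Question: {question}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
    )
    return json.loads(resp.choices[0].message.content)


def render(df: pd.DataFrame, spec: dict) -> None:
    """Apply the spec with ordinary pandas/matplotlib code."""
    view = df.query(spec["filter"]) if spec.get("filter") else df
    view.plot(x=spec["x"], y=spec["y"], kind=spec["chart"])
    plt.show()


# Example (hypothetical data frame):
# render(df, question_to_spec("Plot verses checked per month", list(df.columns)))
```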
Isabella Scarinzi 27:29
Yeah, so you guys tested, you failed, you tried again, you saw the technology improving, and you saw some advantages to doing this. So I want to hear from you: what are the clear advantages of using AI in such a setting? And then maybe, Ian, you can also talk about how you practically tested these things out, maybe in the BT Servant tool that you've been developing as well.
Ian Lindsey 28:01
Early on, you heard stuff like, yeah, AI can 5x or 10x coding, and I was super skeptical when I would hear that. I didn't believe it at all. And for a while I just wasn't seeing that type of multiplier, where I could produce 10 times more code than I could produce without it in the same amount of time. That's because I was using ChatGPT and watching Klappy. I know later we'll talk about how developers can get onboarded to this kind of stuff, but I was fortunate that I was trailing behind Klappy, because it allowed me to sidestep a lot of his pitfalls. He's running the experiment; I'm just getting the data from him, following behind him, and skipping all the stuff that didn't work. That's just gold, so I'm really appreciative. Klappy was at the edge; he wasn't trailing behind somebody else, so he was doing a lot of trial and error, and I was skipping some of his errors. And once I got introduced, I know once again I'm not trying to be too technical, to Codex CLI, or command line interface, and then lately also Claude, it made it so that now I can literally have multiple windows up on my computer. These windows are little developers, and I have them coding on different tasks in parallel while I'm leaning over talking to my wife about something that happened at church. When that stuff started happening, I would even tell my wife, do you know what's happening right now on my computer? She was like, what? I'm like, I've got 10 coders going right now. That started to really blow my mind. So with the right guardrails in place, which, as we've been talking about, takes a lot of trial and error to learn how to set up properly, and everybody's flow is different, so that's another variable: what are you trying to actually accomplish? The guardrails will be slightly different for you than for another coder. But once those are in place, and once you find the right tools, you can truly have this parallel 5x, sometimes even 10x, workflow going on, and that's just gold, especially for a Bible translation organization like unfoldingWord that doesn't have some huge team of coders. One of the reasons I started trying to follow behind Klappy and solve this problem is because we don't really have that many coders at unfoldingWord, but we want to innovate in the coding space, and so we have a couple of options, right? We can go try to hire a bunch of people, which is not going to happen, or maybe try to get some volunteers, or we can see if this AI thing can work and multiply ourselves that way. So I started to see that that was working, and then, in particular, with BT Servant. Yeah, quick explanation of BT Servant.
It's a WhatsApp chatbot that basically helps Bible translators, right now, better understand the Bible, understand the passages they plan to translate. Through this conversational interface, WhatsApp, underneath the hood it leverages Bible translation resources like translation notes, or even just things like commentaries or Tyndale notes. That allows a translator to ask something like, who was Enoch, or tell me all the translation challenges in John 1:1, where BT Servant responds with notes about Greek challenges and other types of challenges. The reason the conversation we've been having is relevant is because to code that by myself in any normal amount of time was just not going to be possible. And so it's funny, I sent Klappy this screenshot, I don't remember when I sent it to him, where at the beginning of BT Servant, if you just look at the git commit history, you see Ian's name in all these commits: Ian is writing code, writing code, writing code. Then there's a clear line in the sand where all of a sudden it switches to Codex assistant, Codex assistant, my bot. It was like flipping a switch. And from that point forward, whether it's Codex or whether it's Claude, both AI assistants that I use, they just took over. I can't remember the last time I completed a ticket or fixed a bug or wrote a new feature myself. My bots have totally taken over. And you can just see it: the amount of commits that I would be able to do in a day versus how many they are doing, it's like 4x, 5x, it's just dramatic. So it has direct application to any Bible translation organization that doesn't have a whole team of coders but still wants to innovate in the tech space for helping translators translate the Bible better.
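For a sense of the general shape of a tool like this, and to be clear, this is not BT Servant's actual code, a sketch might look like the following: an incoming WhatsApp message, a lookup of curated translation resources for the referenced passage, and an LLM answer grounded in those resources. The helper stubs and model name are hypothetical placeholders.

```python
# Not BT Servant's actual code -- only a sketch of the general shape described
# above: an incoming WhatsApp message, a lookup of curated translation
# resources for the referenced passage, and an LLM answer grounded in them.
# The helper stubs and model name below are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()


def lookup_translation_notes(passage: str) -> str:
    """Hypothetical stub: fetch curated notes/commentary for a passage."""
    return ""  # in a real tool this would query stored translation resources


def extract_passage_reference(text: str) -> str | None:
    """Hypothetical stub: pull a reference like 'JHN 1:1' out of the message."""
    return None


def send_whatsapp_reply(recipient: str, body: str) -> None:
    """Hypothetical stub: deliver the reply through a WhatsApp API provider."""
    print(f"-> {recipient}: {body}")


def answer_translator(question: str, passage: str | None) -> str:
    notes = lookup_translation_notes(passage) if passage else ""
    system = (
        "You assist Bible translators. Ground your answer in the provided "
        "translation notes and commentaries; say when they do not cover something."
    )
    user = f"Notes for {passage or 'no passage'}:\n{notes}\n\nQuestion: {question}"
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
    )
    return resp.choices[0].message.content


def handle_incoming_whatsapp(message: dict) -> None:
    passage = extract_passage_reference(message["text"])
    send_whatsapp_reply(message["from"], answer_translator(message["text"], passage))
```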
Klappy 33:00
Yeah, that's great, Ian.
Isabella, you asked what some of the advantages of using AI in developing software are. Ian, you touched on this earlier. Personally, the biggest advantage I have is that when I'm in the zone programming, I don't want to say my personality changes, but I shut off the emotional center of my brain, and I'm focused on logic, and I'm thinking in code. It's like my brain enters this alternate reality, and I live and breathe that code base. I feel like I'm immersed in it, and I'll forget to eat, I'll forget to get anything to drink, I'll lose sleep. I could go for two or three days with no sleep, skipping meals. The world outside of my terminal does not exist. Well, that's one of the reasons why I haven't been able to get into developing software on the side: I knew, with my role and the capacity I was serving in for the lab, I just didn't have the time to switch my brain into that mode to do anything productive. For me to invest in that switch, I have to be gone for days or weeks into another reality, and then I can emerge on the other side with whatever it is I built. Well, when you're using AI, you get to have the creative juices of what you want to see as the outcome, and you talk to an AI about it and you plan it together. And then you let it, or another AI, go execute it. And then you figure out together how you're going to test it, how you're going to define the definition of done, how you're going to distribute it, all the types of things you would normally do for software, but really you just have conversations. So it keeps the communication center of my brain alive. There are probably only a few people that will understand that. Some engineers really get irritated when people interrupt them while they're programming, and people think some engineers are just irritable people. But I think it's just that I'm in a different world, and I just got sucked out of that alternate reality, and now I've got to deal with something my brain can't process right then, like human interactions. And I don't have to do that anymore. I don't have to switch between that zone, if you will, when I'm developing software now. Now I just get to think like I'm managing a team. It stays more on the relational side of things. Yeah, I get frustrated with the AI when it does stupid things, but I got mad at people on my teams when they would do stupid things too. So in some ways I get to remain relational and focus on that. And I think that's the biggest advantage.
Ian Lindsey 36:11
Yeah, if I could just touch on one additional thing: it's super cool to me, and I mentioned it a little bit before, that I discovered my real identity is problem solving and ideas. I've always pictured myself as a mad scientist, and I've realized that that mad scientist persona really has little to do with the actual tool I'm using to execute. I had to detach from the idea that me being a mad scientist meant all my identity was in the actual code, and once that detachment happens, now I'm operating at the level of ideas. It's hard to put into words how amazing that is. One of the ways that benefits a Bible translation organization is this: Klappy and many others have had me read a book called The Lean Startup, and the whole idea of being able to innovate, to quickly punch out proofs of concept and prototypes, is baked into that lean startup mentality. If we want to innovate in Bible translation, try a bunch of new ideas really quickly, nothing is better than AI. Before, after we came up with an idea, we'd have to think about, well, how long is that going to take? We have these other four or five great ideas, but we have to pick one, because we only have this amount of time. That's just completely gone now. Why not have AI knock all five of those ideas out while you eat lunch, then you compare and see which one is good, decide to either pivot or persevere on one of those prototypes, and have it do four more? So this ability to rapidly produce POCs and prototypes just skyrockets the speed of innovation, and that's something that's greatly needed, not just in the Bible translation space, but really in any space where you're trying to innovate and create software for users.
Isabella Scarinzi 38:18
Yeah, that's spot on. I was gonna ask: we talked about BT Servant. Maybe you can just provide a quick description of what BT Servant is, and how it fits with this framework that you're talking about, being a problem solver and then continuing to explore and experiment with BT Servant as a tool.
Ian Lindsey 38:40
So BT Servant is a WhatsApp bot, right? Bible translators just use it over WhatsApp. One of the cool parts about that, obviously, is that the last time I checked, 2 to 3 billion people are on WhatsApp, so we skip the whole part of, hey, get used to this new tool. It's just, hey, use this tool you've already been using. And it's a conversational interface, just like if you were texting some translation consultant who was a human. Now you just text BT Servant, and it responds like a translation consultant would. So you can ask questions like, what are the things I need to worry about when translating John 1:1, or Philippians 1:1, and it comes back with translation helps from curated resources, or even just, who was Enoch. We're looking to do more, but right now the current prototype is really operating in that Bible understanding sector. And the reason what I was just talking about plays into this is that we have Tim Jore, he's on our team at unfoldingWord, and one day he was just on Zulip, and he was like, you know, what if we had a WhatsApp bot that Bible translators could use to understand the Bible better in order to translate? He just posed it as this question, and because of the AI tools that I was using, and I wasn't even at the point yet where I'm at now, I was able to knock that out in a couple of days. That's just huge. It shows that the gap between idea, which is just something that Tim posts on Zulip, and first implementation that people can use and see and touch, has shrunk considerably. At that point it was two days, and that was before learning everything I've learned from Klappy; if that same question were asked of me now, it would be a couple of hours. That's huge. Now the idea-to-concrete phase is just really, really small, and it's hard to put into words how valuable that is for any group that's trying to innovate rapidly.
Isabella Scarinzi 40:43
Yeah, that's awesome. So, talking about things in a little bit more practical way now: if someone's listening to this and they're like, I want to start doing that, I want to try out AI for coding, do you have some tips? How could someone go on that journey as well?
Klappy 41:00
Yeah, that's an excellent question, and I would say this is probably the most consistent answer I would have had to any of your questions. From the beginning of when I started this to now, my workflow is somewhat consistent. I start by having a conversation with ChatGPT. The reason I do that: I already have an account with ChatGPT, and it's the first place I go when I want a conversation to challenge my own thinking and my ideas. So when I have an idea of something that needs to be built, I just go talk to ChatGPT. I ask it to interview me. So ask it to interview you about the problem you'd like to solve and how you think you would solve it, and let it walk you through what needs to be built. Then have it describe it, and ask it to generate a prompt that you can hand to another AI to build it. When you do that, you can first drop it into ChatGPT, in a new thread or new chat, and ask it to build a proof of concept as a single-page app, and it'll build it right there. You don't have to leave ChatGPT, and you can touch it, play with it. It won't be very pretty; I mentioned some AIs are good at design and some are better at infrastructure and different things. But ChatGPT, right there, without you going anywhere, can build a single-page proof of concept that will be 80% of what you talked about, and within 20 to 30 minutes you can go from idea to touching it. It's mind blowing. Then, when you see the limits there, you can escape outside of it: you copy that same prompt and have a conversation with another tool, and you figure out which tool works for you, because the tool I use is different than what Ian uses. I use Cursor at the moment, but I've hopped through about 10 different tools, and Ian has hopped through a few. And I can guarantee you, by the time this podcast gets released, I'll be camped on a different tool, because I've left Cursor three times, but I came back. I don't know if I'll come back next time I leave Cursor. Who knows what I'll be using?
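Klappy does all of this inside the ChatGPT interface itself; purely as a rough analogue, the same three steps, have the model interview you, have it write a build prompt, then hand that prompt to a fresh conversation for a proof of concept, could be sketched in code like this, assuming the OpenAI Python SDK and a placeholder model name.

```python
# Klappy does this inside the ChatGPT interface; this is only a rough code
# analogue of the same three steps, assuming the OpenAI Python SDK and a
# placeholder model name. In the real workflow, step 1 is a longer
# back-and-forth interview rather than a single exchange.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # placeholder


def chat(messages: list[dict]) -> str:
    resp = client.chat.completions.create(model=MODEL, messages=messages)
    return resp.choices[0].message.content


# Step 1: have the model interview you about the problem and how you'd solve it.
history = [
    {"role": "system", "content": "Interview me, one question at a time, about a "
                                  "problem I want to solve and how I would solve it."},
    {"role": "user", "content": "I want a tool that tracks translation checking progress."},
]
history.append({"role": "assistant", "content": chat(history)})

# Step 2: ask it to turn the interview into a build prompt for another AI.
history.append({"role": "user", "content": "Based on our conversation, write a detailed "
                                           "prompt I can hand to another AI to build a "
                                           "single-page proof of concept."})
build_prompt = chat(history)

# Step 3: a fresh conversation builds the proof of concept from that prompt alone.
print(chat([{"role": "user", "content": build_prompt}]))
```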
Ian Lindsey 43:26
Yeah, I will say, definitely, for developers trying to learn how to use AI to speed up coding and to be able to ideate and prototype super fast: one of the biggest things is, first, you have to accept that, at least right now, and maybe this will change, maybe Klappy and I will start to turn out a bunch of documentation and other people will too, but right now the best method is still trial and error. Come up with some type of dream scenario in your head of how you would like things to be, like, what if I could code five times faster, and then start exploring the various tools that are out there. Even with Klappy's guidance, which did help me figure out where to start my search and what things to leave out from the beginning, I still had to try things. I tried Cursor for a while and then ended up leaving Cursor. Then I tried Codex Cloud, and then Codex CLI, and now I use both Codex CLI and Claude CLI together, and those are my two developers. But that was a lot of trial and error. So you have to have some level of perseverance, otherwise you'll quit too early. That's one of the things I see a whole group of people on Reddit doing: quitting too early, or having a different way of thinking about AI than what's helpful. So I would say, be willing to do trial and error. Start to view interacting with AI more like interacting with a human than with deterministic code. That was another big shift: when I saw the way Klappy was working with his AI bots, like he would work with a software engineer, you start to have better expectations. You don't hold the AI to unrealistic expectations. It's going to make mistakes, but the question is, is it making fewer mistakes? And even if it's making the same amount of mistakes, you've got to remember this thing's coding while I'm doing something else, so it's still a huge win. So be willing to interact with AI in the way that makes sense, given what it is. And then be willing to move up a level of abstraction. Don't get stuck on why you really like to code; maybe just give it a shot. What is it like to be more at the software architect level, or even at the product manager level, where you're coming up with new ideas and things like that? See if you enjoy that space, because if you do, then you're really, really going to love coding with AI.
Klappy 46:04
Yeah, I think another thing to help you get past the initial frustration: I was ready to throw in the towel because I got so frustrated at the repeated errors. At first it felt like 75% of all my prompts and the responses and the things it did, I had to undo. And as I got angry and frustrated, I'd have to walk away. Then I had this moment when talking with Ian: what if I gave it a personality? So we tried that out. Ian and I were talking, we tried Spock and a few different things. I've done it where it plays out like scenes from movies, so your whole code base is this new world that it's describing, but that gets a little distracting. So I ended up using my favorite comedian. If you know Nate Bargatze, he plays dumb all the time, and so it's perfect for an AI that makes mistakes. When it makes mistakes and you get upset, it just kind of rolls off, like, well, I'm not a coder, what do you expect from me? It's hard to get upset when you have a little bit of humor in there. So I do recommend giving your AI a personality that you have a hard time getting upset with, and you'll notice you'll communicate better with the AI and get better outputs and outcomes from it. And I saw it flip when I started getting less angry, and the technology got better, and all those things. Now I feel like the ratio is more like only 20% I'm throwing away, and that 80% is happening so fast. Even when it was 75% I was throwing away, I felt like I was able to do so much more that it was still worth pursuing and learning. And as a final thought that I would want to share, just the reminder: this is the worst it's ever going to be. A year from now, this is going to seem like we were playing with, I don't know, punch cards. Looking at the rate of progress year over year in how well an AI has been able to program alongside us and with us, I feel like we're in for a great 2026, seeing what we're able to do with AI.
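The personality trick Klappy describes is usually just a standing instruction at the top of the model's context. A minimal sketch, with an invented deadpan persona and a placeholder model name, neither of which is his actual setup, might look like this.

```python
# The personality trick is just a standing instruction at the top of the
# context. The persona wording and model name here are invented for
# illustration; they are not the actual setup described in the episode.
from openai import OpenAI

client = OpenAI()

PERSONA = (
    "Adopt a deadpan, self-deprecating comedian persona. When your code turns "
    "out to be wrong, own it lightly ('well, I never claimed to be a coder') "
    "and then fix it. Stay concise and keep working the problem."
)


def ask_with_persona(task: str) -> str:
    """Send a coding task with the persona pinned as the system message."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": PERSONA},
            {"role": "user", "content": task},
        ],
    )
    return resp.choices[0].message.content
```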
Isabella Scarinzi 48:44
Yeah, that's awesome. I just want to say a reminder to our listeners, because Ian was talking about acceleration numbers, like the 5 to 10x acceleration that we've seen at the lab and with our partners like unfoldingWord: we're not just accelerating for the sake of accelerating. We really have the All Access Goals in mind, and we're trying, in all different areas of the Bible translation movement, to see how we can innovate. So my invitation is also for the people that aren't coders, aren't software engineers: how can you see innovation in your own setting, in your own role as well? And did you also have any final thoughts you would want to share with us?
Ian Lindsey 49:28
Yeah, I mean, I would just say, for the software engineers out there: if you've tried it before and you've quit, give it another shot. And think about any of the gripes you have with it, specifically coding mistakes or the fear of it producing risky code. Just think for a second: that's the same exact fear you have whenever you're working with human software engineers. They make coding mistakes. I've yet to meet a software engineer that produces perfect code, and I've yet to meet a software engineer that never produces risky code. If we're being really honest, many software engineers back in the day, when there was something they didn't know about, would literally type it into Google, go to the various websites that we all used, and copy and paste. So a lot of the things that we're worried about AI doing, you have all those same risks with humans. And so the question you should be asking is: what would I do to mitigate the risk with humans? What are some of the things that we already do to mitigate the risk of humans? What are the tests we have in place? It's a lot of the stuff that a lot of developers hate, right? Tests, unit tests, a lot of the stuff in CI/CD that I used to hate because it's just extra stuff to do. Now it's become my best friend. So dive back into software architecture books: what are all the guardrails we put in place for humans? What if I put those same guardrails in place for AI? Put them in place and see if a lot of the things you're worried about go away, or at least the rate of those things occurring drops so low that it's comparable with a junior engineer, and from what I've even been seeing, sometimes a mid-level or even senior engineer. So be willing to touch things like Reddit; if you don't have a Klappy, check out Reddit and see what people are trying, specifically with regard to how to mitigate risk and lower the chance of it producing bad code. And try. That's where the experimentation has to take place: in guardrails. You'll see that the code quality, Klappy and I can attest, will start to get better and better. Your trust of AI will start to increase more and more because the appropriate guardrails are in place, and you'll start to reach a level of comfort where you're like, this thing actually works if the guardrails are in place, the same guardrails you would already be using. So yeah, those are my last words for software engineers: it works if you use it right, so be willing to invest in learning how to use it correctly.
Klappy 52:13
No, I already shared my last words, but I've gotten to a point where I don't even look at the code anymore. That's where we're at.
Ian Lindsey 52:24
I remember when Klappy first told me that he didn't look at the code, and my journey can be summed up by the fact that when he first said that, I was overtaken by a sense of utter fear and terror, like, what do you mean you don't look at the code? And now I don't look at the code either. No one could have ever told me five months ago that I would ever not be looking at the code, but it really just slows me down. So yeah, it can happen. If I can come to that level of trust of AI, anybody can.
Klappy 53:01
And I'll say the reason why is because when I do peek, I find it more consistent than anything I've created, which is intimidating.
Isabella Scarinzi 53:16
It's crazy how we went from gloom and doom just five months ago to you guys not looking at the code anymore. That's insane. Well, this has been a great conversation. Thank you for listening in today, and we really appreciate Ian coming on to talk with us about some of his journey in using AI for software development. If you, our listener, wish to explore this further, you can send us an email at lab@eten.bible; we'll be happy to connect with you and share some next steps as well. And don't forget to subscribe to this podcast, and we will see you at our next episode.
Theophany Media 54:01
The Bible Translation Innovation Podcast is brought to you by the ETEN Innovation Lab. This episode is edited and produced by Jake Dobrins with Theophany Media. Your hosts were Joel Matthew and Christopher Clapp, with facilitation by Isabella Scarinzi. Please subscribe on your favorite podcast platform, and we'll be with you again next month.