Join the Artificial Intelligence Student Organization, RAISE, at the Crummer Graduate School of Business and Rollins College to discuss the reality and implications of AI in the business and academic world.
AI Pod EP. 7 - Loren Interview
===
Christine: [00:00:00] This is the AI Across Campus and Beyond podcast. Today we'll be talking with Loren Sauer, the CEO of ModelMind.ai, a company that helps high-growth businesses integrate AI into their operations. Loren is an AI advocate, entrepreneur, and governance specialist with 20 years in software development. I'm Christine, your co-host and president of the AI student organization, RAISE. I'm self-designing a major here in AI.
Casey: And I'm Casey, your other co-host and the graduate-side co-president of RAISE. I also work at the AI Edge Center here on campus. Loren, could you tell us a little bit about yourself and your background as an application developer? How did it lead you to working in the AI space?
Loren: Certainly, it's fantastic to be here with you today. I actually have an economics degree, and I started off as a commodities trader at the Minneapolis Grain Exchange. In the evenings, I would go over to what you'd call a meetup now, at the Piper Jaffray Tower in [00:01:00] downtown Minneapolis, and we would talk AI and how to apply AI to training models to help with asset allocation and things like that. It was very interesting for me. I was always interested in computer science, but I had never really dug into it. Then in our office, a guy introduced me to a software program, I started digging into it, and I switched careers; that was about 30 years ago. I've been doing AI in a lot of projects over the years, which has been exciting. Interestingly, not as AI per se: we were applying AI solutions to business problems, and I was basically a business consultant applying technology to those problems. It's been a transition since then. At home I have a whole bookcase dedicated to AI books; I was fascinated and have dug deeply into AI ever since. I think my first AI project was about 1998, and it's continued all the way to today. So it's fun to come into something now that it's the hot topic, because it wasn't always the hot topic. It's exciting to [00:02:00] get here.
Must be rewarding.
It is. And it's one thing to understand AI, but my skill set is really how to apply things, how to apply technology, so it's exciting to be able to go apply that. What we actually focus on at ModelMind is mostly AI training and coaching. We will train up teams of people to understand how to leverage AI, how to think about the risks of AI, and then how to apply that in their organization. So it's exciting to be able to take what we've learned and make it productive for other people.
What are some common challenges that you might teach your customers about?
Governance is always a big issue with AI; it's probably the number one thing. When we go into an organization, we have to talk about the risks involved. There are lots of societal risks, and we don't talk so much about those, because a lot of personal perspective goes into them. But when it comes to a business making [00:03:00] decisions: what's going to happen to your data? Who owns your data? Who owns what goes into this, and what comes out of it? Those are things we need to deeply understand, and we need to help leadership teams understand them. We spend a lot of time training on that, so that's probably the number one thing: the governance aspect of this.
In relation to that, Loren, I know we had an AI and law forum here recently. Obviously we had a lot of lawyers there, and they were discussing some of the legal issues. Something I heard there that was interesting, and that relates to data management as a risk or challenge, is that even if the organization maintains ownership of its own data, if you're using that data with an AI model, the way the speaker put it, the company who developed that AI application or system essentially owns the learning. If you wanted to switch from a more advanced version of GPT to DeepSeek, you would have to retrain it on all of those data sets, because you don't own the learning; you don't own the learning capacity, I think is how [00:04:00] they put it. Do you have any perspectives on that? Is that a challenge organizations face, or do you think that's a future-facing issue?
Oh no, it's an issue right now, but I would phrase that a little differently. The problem, I would just say, is data ownership.

Yes.

And what I mean by that is: you have data, and you're using that data to produce some result. So you need to own the data you currently have, but you also need to own the data that comes out of it, and that's the data that's getting used for training. So the number one thing, and we talk about this in our trainings: if we're doing ChatGPT training, as an example, we require you to have a paid account, because then you can actually limit what they can do with your information in their models. And you do want to own that. It's very critical that you understand what it means to not own the product that's coming out. It's not that you can't use it; you can use it, but you don't necessarily own the product coming out, and if you haven't paid for this, you can't opt out of being [00:05:00] trained into other models, right? There are lots of stories where companies didn't understand this: they published something into an AI, and now anybody can search for sometimes very confidential information in that AI, because they didn't understand what they were doing.
Interesting. I think I've run into that myself with some of the bibliometric research, where the names are supposed to remain confidential. It was a really simple task I wanted to use the AI for, but I was cautioned against it: just data transformation, basically reformatting a very long Excel sheet to work with the visualization software Gephi. But yeah, if you're trying to protect academics' privacy or identity, it's a little sensitive to upload everyone's names and information to what will then be, what, public information?
Yeah. I think this gets to something: we met at a session at Rollins on small language models, and I think what you were alluding to in the previous question is that people are going to be more interested in how they're training their models and what models they're training. Right now that happens, and lots of companies are going down this path, but not that many people are really thinking about it. I think over the next year that's going to be much more prominent, where everybody will have their own custom models, [00:06:00] and you'll control where they're running. It might not necessarily be internally, but it'll be on a cloud system where you control the environment and have access and security around it yourself. All these things are important to think about when we think about governance and ownership and how not to publicize confidential information. That's going to change a lot in the next year. One of the big changes, I think, is what models we use, and it's going to be a lot more diverse in what models people are using.
Interesting. So you think everyone using a private model will make it more secure?

Not by default, but because they're thinking about the model creation process, they will use more standard processes for managing where that model exists and who has access to it, and that will allow them to make it more secure.
Casey: Fair enough.
Loren: Let me just wrap up with one more point on this topic. The AI has accelerated faster than the tools around it have, so there are lots of tools that don't have typical enterprise security. A business would normally not implement a lot of these things, because they just don't have the security it would expect out of a solution; but there's so much value in these AI tools that businesses are allowing it, or it's happening even if they're not allowing it. [00:07:00] What's going to happen is that the tools are starting to catch up, and they'll continue to catch up, with more capability to manage security and rights and where something happens, so businesses can make better decisions about how they're going to implement something, with security in mind.
You mentioned that it's happening under businesses' noses, whether they want it or not. Something we heard during that conversation is that people will bring their phones to work and use GPT on their phone, even if it's not permissible on a work computer. Do you see any commonalities between [00:08:00] the dot-com boom, when companies faced the challenge of bringing their systems online and adapting to the internet environment, and today? I don't know what the security landscape was like then, but people have called the dot-com boom and the development of the internet the third industrial revolution, and the proliferation of AI the fourth. Are companies facing any of the same challenges from what you've seen, especially in terms of security, or is it a whole new beast?
I was doing software development heavily at the start of the boom, and there was very minimal security in most systems. We've come a long way in that process; organizations are way, way more sophisticated than they were then, from a security perspective, from a management perspective, from a developer perspective. All these things have improved. Now, I think there are similarities, but AI is like nothing we've ever seen before, in the sense that it is a technology solution, but the great thing [00:09:00] is I can go sit down with a business customer who wants to solve a problem, and they don't have to understand technology to actually create a solution for a lot of their problems with AI, because the AI can help do that. So it's different in so many ways; it's more accessible, by just going and having a conversation. And we always talk about the human side of this: AI actually makes us more human the more we interact with it. It's a very interesting thing. There was a study that looked at the happiness of people working with AI on a daily basis, and it's very interesting: the people who were dialoguing with AI all day, even though it's not a real person, actually had more job enjoyment, because they were talking to somebody, right? They were getting human-like responses, even though it was AI behind the scenes. So there are lots of differences we have to be aware of, and it's unlike anything we've ever done before, even though there are some similarities, like lots of unknowns. That's all [00:10:00] been true in every boom that's happened.

Yeah, I feel like artificial social interaction just sounds really dystopian. So do you think it's a positive thing, or...?
I think it can be. My team was sharing just the other day a research paper on using AI for therapy, and it's incredibly powerful: if it's trained properly, people will actually engage with it. Here's an interesting thing. Especially in the development world, developers notoriously don't really want to talk to people a lot of the time.

Introverted, yeah.

They're introverted, but it's more than that: they just don't want to interface with people. But if it's an AI, they're open to having the conversation and actually getting deeper on problems. What's fascinating, and we actually see this a lot when we're training people, is that they don't want to go ask their boss a question. We've actually worked with a company to build a chatbot for coaching their [00:11:00] team, and people will gladly talk to that bot over talking to their boss, because they don't want to look dumb in front of somebody by asking questions. So from a learning perspective, it's actually a very powerful thing that gets over some of the in-person anxiety people have.
That's really interesting. But I feel like it could harm social development, because when they actually do have to go talk to their boss, they might not know how to interact with them as much.

That's possible. But if you've trained up the solution, and this is where I think we need trained solutions for a problem, then you can potentially build that in, to keep structuring and recommending and coaching them. So from a coaching perspective, I think it's a great tool. For everyday use, you might be right that we need more to it in order to be able to interact in human life. Nothing replaces human interaction; I agree with that. But I think it's an interesting tool that can help with productivity and feedback, when human [00:12:00] feedback is just too strong sometimes, right? So it actually does help in a lot of ways. And the AI has the ability, if you're building out a therapist solution, let's say, and it sees certain things, to raise that topic up to somebody else: hey, we have an issue here; we need to bring in a human. Now, how do you do that process from a trust perspective and everything? There are ways to do that, and that's what's exciting: people are thinking about this in so many different areas, and about that dynamic between human interaction and AI interaction.
Yeah, that's really interesting. I heard something recently that you've helped me wrap my head around: this idea that interacting with an AI increases thought diversity. I thought it might be the opposite: we have these AI tools, they're great, they can pull in a lot of information, but they make people all think the same, or all have the same data set as the AI. But what you described, Loren, sounds like, especially for introverted people, or people who might be hesitant to [00:13:00] jump that gap and make the in-person social interaction, if they're brainstorming with an AI tool, it pulls in a bunch of different ideas in a safe space. What I heard is that all these different ideas, this exposure to thought diversity and all these different perspectives, opinions, and ways of thinking, are things they traditionally wouldn't encounter unless they had the opportunity to be in a boardroom full of twelve different people with different backgrounds and majors: a very public environment where a lot of people wouldn't share, or would shut down, and one that's honestly difficult to facilitate nowadays.
I think it's really interesting. There are tools out there, and one that I like is CrewAI, a tool that basically allows you to create different bots, or trainings, with different perspectives. I can actually train up multiple bots, essentially, that have different perspectives and different backgrounds. So take your boardroom example: let's say we have a bot with a CFO perspective, one with a [00:14:00] legal perspective, one with a product development perspective, and so on, all the different perspectives you'd have in the boardroom. Now start a conversation about, hey, what new products should we develop and why? You can have these bots, each thinking about that differently, which actually increases the ideas that come up and expands that diversity of thought quite quickly. So we recommend doing that. And even if you're just using ChatGPT, just go into multiple chats, start each one with a different role, and copy and paste your ideas between them. That works really well, and we recommend it when you're brainstorming, because you get that diversity of thought. It's very easy to do; there are just some steps to make it happen.
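A minimal sketch of the multi-persona brainstorming pattern Loren describes, using the plain OpenAI Python SDK rather than CrewAI itself; the model name, personas, and prompts are illustrative assumptions, not ModelMind's actual setup:

```python
# Multi-persona brainstorming: ask the same question under different
# "boardroom" roles, then compare the answers for diversity of thought.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PERSONAS = {
    "CFO": "You are a cautious CFO focused on cost, margin, and risk.",
    "Legal": "You are corporate counsel focused on liability and compliance.",
    "Product": "You are a product lead focused on user needs and market fit.",
}

question = "What new products should we develop next year, and why?"

for role, system_prompt in PERSONAS.items():
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    )
    print(f"--- {role} perspective ---")
    print(response.choices[0].message.content)
```

The same effect comes from opening three ChatGPT tabs, giving each a different role, and pasting ideas between them; the script just automates the copy-and-paste loop.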
I hadn't even thought about that. When we did have the presentation on prompting from the AI technologist on campus, she mentioned the importance of assigning it a role. But having three different tabs where you say, "as a marketing director" or "as a financial analyst" (or, for a student, "as a finance student"), analyze this problem, and then comparing them. And I also want to say, in line with what you and Christine have been saying: the number one use for AI out there so far is brainstorming. But the number two use, from what was shared at the [00:15:00] conference recently, is companionship and human connection. So you're saying you actually get that companionship and human connection even when you're not saying "ask me about my day," even when you're just brainstorming and trying to work with the tools.
And especially if you think about it: we've worked on some more education-oriented tools, and from an education standpoint, how do you actually build that trust and relationship in the process of teaching somebody? That process leads to that humanization of interaction, right? So I think that's really interesting. It's not just about companionship; it's about having that companionship in every step of what you're doing, in this world we live in with lots of remote workers and so on. How do you really get that at work? We work with a client that is spread all over the country, with lots of new [00:16:00] hires who don't really know people, and it's interesting how they're leveraging AI to help build those connections between real people, with the AI as the glue in between to bring it all together. So there are lots of possibilities there.
Interesting to see it as a connector, a mesh for these very disparate or far-flung companies where you have remote workers all over the place.

Yeah.

Yeah, I've heard of a lot of different uses for AI, but I think that's a new one.

There's an unlimited number of use cases. That's the exciting thing.
What you were saying about the different AI personalities or profiles reminds me of this idea of AI project managers. I always thought of that as an internal system where you have one model that's fielding these different requests: finding a model that's really good at math and asking it to do this component of a task, finding a model that's really good at communication or essay writing, or maybe even grant writing, and asking it to do that component. But internally, you just tell it what you [00:17:00] want, and your skill is being able to communicate with that project-manager AI agent, which then goes and fetches three or four different specialized agents. What you're describing is a way that not only works now, without the advancement of agents that a lot of people see coming, but also incorporates people more closely, where they can see what each specialized AI is communicating. Do you see any risks to this moving behind the scenes, if these agents do become mainstream?
Agents are becoming mainstream; that's happening very quickly, and we're doing a bunch of our own testing on it. I think, though, the goal is human in the loop, right?

Yes.

How do we keep the human in the process and guiding the process? That's really where things become powerful. As an example, we have a test system that helps write code for us, and I think it's really interesting if you think about the personality behind this and how you give the AI [00:18:00] personality. I have two developers on this bot team, and one of the developers is optimistic and one is pessimistic. In my career I've always been very frustrated by pessimistic developers; they'll tell you every reason why something's not going to work, and I'm pretty optimistic. However, there's huge value in that pessimistic developer. So we define the role, but then we also give each one this optimistic or pessimistic view, to actually help improve the quality of what comes out of them, which I think is an interesting process. But then we connect that to the human, who can interject and actually guide the system. People are afraid of the system going off by itself; we inject the human into this. My view of the future is that everybody in an organization has a bot associated with them that can help them do their job, and all those bots can talk to each other, but all the people can talk to their bots too. So now I have the marketing person talking to a product development person, talking [00:19:00] to a developer; their bots are talking, but the people are talking, and the bots are just helping accelerate the process, while the people are in charge of guiding it and providing feedback into where that process goes.
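A rough sketch of the optimist/pessimist reviewer pair with a human gate, under the same assumed OpenAI SDK as the sketch above; the prompts, model, and approval step are hypothetical, not Loren's actual bot team:

```python
# Two reviewer personas critique the same code; a human makes the final call.
from openai import OpenAI

client = OpenAI()

REVIEWERS = {
    "optimist": "You are an optimistic senior developer. Point out what works "
                "and the fastest path to shipping this code.",
    "pessimist": "You are a pessimistic senior developer. List every way this "
                 "code could fail in production.",
}

def review(code: str) -> dict:
    """Collect both perspectives on a code snippet."""
    opinions = {}
    for name, prompt in REVIEWERS.items():
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative
            messages=[
                {"role": "system", "content": prompt},
                {"role": "user", "content": code},
            ],
        )
        opinions[name] = resp.choices[0].message.content
    return opinions

def human_in_the_loop(code: str) -> bool:
    """Show both reviews, then let a person decide whether the code ships."""
    for name, opinion in review(code).items():
        print(f"--- {name} ---\n{opinion}\n")
    return input("Ship this code? [y/N] ").strip().lower() == "y"
```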
Yeah, they're a complete unit.

Correct.

So if the AI has so much capability to take on any type of personality, would you say that makes the need for real human diversity in the workplace less?

I think we need diversity of thought, just like we want diversity of thought when we're talking to the AI. And you want that diversity because how you're going to act and interact with something depends on the view you bring into it. So we definitely want that diversity in who's running this and how you're managing it, because that's how you're going to get better results.
We spoke recently about AI and cybersecurity, and we encountered an idea similar to what you're saying, with everyone having [00:20:00] their own AI personal assistant. We were discussing the dangers of spoofing in cybersecurity: how easy it is today for people to create false accounts or impersonate your mother or a family member, using these social lies to get people to create security risks. Do you see any risk for a company or an organization if every employee has their own AI twin, their AI extension, their little partner and idea assistant? Could those AI models be susceptible to spoofing and potentially pose security risks, since someone could program an application that very convincingly pretends to be, say, Loren's AI assistant trying to schedule a meeting with my AI assistant? What risks do you see there?

I think there are lots of risks, but the risk is no different than when you're building an AI workflow or any other process. We still have to think about this. We like to say this is a very human process, and non-technical people can do a lot of it, [00:21:00] but there is a role for tech people, for security: every system that gets developed needs that applied, because there are risks involved. We have to think through what we allow it to do, what we don't allow it to do, and why. Part of that is deciding what capability we're going to give the AI. That bot, that assistant of yours: what are we going to give it the ability to do for you, and as you? So the question is, if we're going to let it send email for you, are you okay with the responses it might send? We have to be okay with the types of responses the AI is going to create, and we also have to know how to program in, "here are things you can't do when you're sending email," and build that into the logic.
Casey: Yeah.
Loren: We have to be okay with a risk, and if you're not okay with a risk, you shouldn't be doing that step. If you are, we also have to put constraints on it, to help manage and mitigate the risk involved. There are ways to do that, even in the prompting and what we give the prompting. [00:22:00] My goal is that we train people so they understand this, because it's not something people have had to think about before. So our goal when we're training is to make sure people understand what you're allowing the AI to do. Even a year ago, the expectation was that AI is here and it's all going to be like I, Robot or whatever, and I just think that's completely not true. Not that we couldn't do that; we could. But there are all sorts of risks, right? So what we want is a controlled way to roll out more advanced systems and take advantage of them, while we actually control them: they can only do things we give them access to.
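A toy sketch of the capability-gating idea Loren describes: every action the assistant requests is checked against an explicit policy before it runs. The allowlist, the action names, and the blocked-domain rule are all hypothetical:

```python
# Gate what the assistant may do: requested actions are checked against
# an explicit policy, and anything sensitive is left as a human decision.
ALLOWED_ACTIONS = {"draft_email", "schedule_meeting"}  # no autonomous sending
BLOCKED_RECIPIENT_DOMAINS = {"competitor.example.com"}

def execute(action: str, **kwargs) -> str:
    """Run an assistant-requested action only if policy allows it."""
    if action not in ALLOWED_ACTIONS:
        return f"Refused: '{action}' is not in the allowlist."
    if action == "draft_email":
        recipient = kwargs.get("to", "")
        domain = recipient.split("@")[-1]
        if domain in BLOCKED_RECIPIENT_DOMAINS:
            return f"Refused: email to '{domain}' is blocked by policy."
        # Draft only; a human reviews the draft and presses send.
        return f"Draft saved for human review (to: {recipient})."
    return f"Meeting proposed: {kwargs.get('topic', 'untitled')}."

# The assistant asks to send mail directly, which the policy refuses:
print(execute("send_email", to="board@example.com"))
print(execute("draft_email", to="teammate@example.com"))
```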
I know we've talked a lot about privacy and security, but are there any other concerns you have, ethical or otherwise, about AI? I know you're a really optimistic person, and you seem to be excited about this.

I'm very optimistic, and I do think there are lots of risks; they're all around privacy and data security. One of the big risks that people don't talk about, and this is again data security: [00:23:00] when we give access to data, let's say we create a custom GPT and I give access to that custom GPT, how is that going to get used? It was intended for a product development process, but marketing got hold of it, and it's providing good answers for them; but there's data inside it that we don't necessarily want shared. That's a data security issue, data seepage, basically: it was intended for one thing, and now it's being used for something else, and it's not clear to people, partially because the tools haven't caught up. So there are risks like that; those are my big concerns. There are social concerns as well. Christine, I know you like to talk about these. There are lots of risks around job loss, and what's going to happen there? For us, we're focusing on how to help our customers roll this out, so I don't get into those conversations as much. But I think it's very interesting as we look at how you're going to train up the next round of people, because we actually do need more smart people to guide these systems. [00:24:00] The problem I see is that we have a lot of college students and high school students right now who can't find internships, because companies are looking at this and saying, hey, we want to cut some corners; we're not going to hire an intern, because AI is going to do this entry-level work. That might be true, but the problem is you've just shut off your pipeline of people that you're building up and training on your systems, your processes, how you're doing things. Who's going to become the next expert for you if you don't bring these people in and train them up? So that's a massive risk I see from a societal perspective: we need more critical thinkers, we need more smart kids to come and work on these things, but if we're not going to give them the opportunity to actually do that, it's a big challenge.
That's interesting that you mention critical thinking. Another thing I heard recently, and I'm sure we've heard it a lot, is that that's the main skill people are going to need to work with AI, in this new job market that's developing as AI proliferates across all [00:25:00] the businesses people want to work with. So how do you see AI in education potentially jeopardizing critical thinking? Do you think there's a risk, and I've talked about this and thought about it, if people are being educated with AI assistance or AI tools? They're an awesome tool for that, but how do you ensure that they still develop the critical thinking skills? How does ModelMind, when you have these internships, focus on developing the skills people need to work with AI and to think critically when engaging with generative content?
So I agree a hundred percent that we need more critical thinkers, right?

Yeah.

And the way we do that is we build in a lot of features that are, again, very humanistic, because learning is a humanistic process. The AI is doing a very similar process in the way we're training and teaching models, but it's a human process that we've applied to technology, not the other way around. So we need to make sure we keep it a human process, and the AI can contribute to making learners much more effective. [00:26:00] We absolutely want everybody to use AI, but we train in things that are critical to critical thinking, such as failure. We live in a society where failure is not an option. I come from a software development background, and failure is always an option; it's how you fail, and what you do when you fail, that actually matters. There are so many times in software development where you get it wrong and it doesn't work on the first try. What do you do to solve that? We need more people to think like that, and the AI can actually help, but you need the human perspective of how we're going to handle failure. I think this is one of the gaps in our education system right now: it's hard to develop critical thinkers if you don't allow failure. So we need a culture that actually supports failure in a way that says, okay, you've failed; now what's the next step? And moves us forward from there.
I definitely agree. There's a lot of grade inflation in the public school system. They move kids through, [00:27:00] and kids have actually kind of learned not to take things seriously, because if you fail, worst case scenario, you redo the class over the summer. I've seen schools that are not very well developed boasting 97, 99 percent graduation rates. It is an issue: students feel like they can't fail, and it definitely hinders critical thinking. If you look at standardized testing and the AP data, real failure is becoming less and less common.
We build this into our internship program, and it's very interesting: usually by week three of our 10-week program, you can't get it all done. That's intentional, because you have to learn, number one, how to work as a team, since we always put four interns together; and you have to learn how to step back and reevaluate when it doesn't work: what can you get to work, and then start on those processes. People just aren't taught how to deal with these challenges; some people learn it, but lots of people don't. So our [00:28:00] goal is to help people learn that, because in my mind that's the number one thing we can do to actually make better employees: learning how to deal with failure, so that they have a little bit tougher skin and can say, okay, this isn't the end of the world; what's the next step? And, I'm going to share that, so I can get feedback to actually help take the next step; versus, oh, I screwed up, now I'm going to be in trouble, and I don't want to share that with anybody. It's a matter of perspective, and as a culture we're not very good at that. We need to do more training like that, but we can also build this in: if we're going to build AI systems for education, let's build some of these core competencies of being human into those solutions, rather than focusing on grades, I think.
So you think there's going to be a big mentality shift coming into the workforce?

I don't know if there is. There should be.

Casey: Where's the optimism?

Loren: Yeah, we're trying to do our part. But that's going to be a massive shift, I think, and there are probably lots of [00:29:00] strikes against it out there in our culture, so I don't know that it'll happen. But for companies that do this, it's a massive win, because you build employees who are incredibly loyal. They want to stick with you through thick and thin, because they know you are committed to them in those challenging periods, so they're committed to you as well. You build incredible resiliency by doing that. It's just a hard thing to do.
Have you seen any difference in the interns you work with just in the past three to five years, since generative AI has come about? Do you see them getting better at working with it, becoming more creative, more narrow-minded? Any changes so far?
This is probably the one thing that I see. We work with young people, we work with interns, and we work with older people who are about to retire, and we're teaching them AI. I think it's fascinating that if I talk to someone who's 63, they're all in on learning this, because they realize they can still learn it. It's not like learning a technology; it's [00:30:00] mostly learning how to talk differently. Which is interesting, because when we talk to high school students and college students, there are exceptions, but we see massive pushback: they don't want anything to do with this. I think that's really interesting, and we haven't quite figured out why yet.

Yeah.

But there is a massive pushback that either this is cheating or this is just not a good thing to do. There are all these concerns about it, but they've blown the concerns up beyond what's realistic, and they've downplayed all the benefits. And it's happening; it's not going away. So I just find it very interesting that a lot of younger people want nothing to do with this. My team and I are baffled by that.
Yeah, that's a reversal from normal technology adoption.

I think it probably has to do with the media surrounding AI. It's pretty negative.

Yeah, I think you're right.

Yep.
Do you see negative media surrounding AI on social media platforms, [00:31:00] YouTube? Where do you think it hits young people the most, Christine?
I think everywhere, like social media. I've heard a lot about the negative environmental impacts of AI, and any movies with robots tend to be world-domination geared. I just feel like there's been a lot of negative connotation about AI in the media.

Interesting.
Kind of the zeitgeist of the moment is fearful, I guess, for an upcoming generation. It's exciting to me, but obviously the concerns are there. That's why we have the club: not just the Rollins AI organization, but AI for social, environmental, and economic impact, framing it as, it's the people who interact with these systems who are going to determine whether it's negative or positive.
Yeah. In my opinion, it's going to be here either way.

Yeah, working with it is better than trying to shut it down.
Yep. And there are lots of issues, right? But we are solving those issues, and we always have solved those issues as new technology comes forward, so I think they're solvable. [00:32:00] They are going to take a lot of different people to solve, though. Back to your diversity question: this isn't something that software developers solve, or business leaders solve, or government leaders solve; everybody needs to solve parts of this in those different areas, whether it's education solving AI in education, or different businesses and how they roll this out, what they're willing to do or not do. So the solution exists; we just have to keep moving towards it.
Casey: So Loren, where do you see AI agents heading in the next three to five years, maybe specifically in businesses?

Loren: I would love to talk about assistants and agents. Think about how fast things are changing. When we started doing training about two years ago, my team sat around saying, oh man, we need to go, we need to go, because we have three months and then everybody's going to know this stuff. We're two years later, and I think we've just touched the tip of the iceberg. So where I feel like we're going and where reality ends up are probably a little bit off from each other, but this is moving incredibly fast. It's interesting: organization after organization we're working with now is actually [00:33:00] rolling this out and applying it, and the tools are accelerating at such an incredible pace that a lot of these agentic systems are happening. Now, I know businesses have a big concern about agentic systems, and rightfully so; that's where things start to get crazy and a little bit scary. However, using those systems to create more structured AI processes, like workflows and automations, is where we really see massive value. So we'll use an agentic system to dialogue, to figure out a problem, but then we'll actually use it to create a very defined process, a workflow, that can get implemented. What's great about that: there are lots of workflows that have never been implemented, because if you have 50 steps in a workflow and every three steps a human has to do something, it never gets built. What if we can replace maybe not all of those steps, but a bunch of them, with AI that can make a decision and validate the decision? Now maybe at step 25 and step 50 a human has to make a decision to validate the whole process. Those are still complicated to [00:34:00] create, but if we can use agentic systems to actually build out these other AI systems, they're structured, we control them, we can manage them easily, and they're low-risk at that point. We're using agentic tools to do that, rather than having agentic tools with capabilities to do anything, anytime; we're using them in a very controlled way to build out a very structured process. I like that model right now. Maybe that'll change over time, but I think there's going to be a ton of AI development that happens this year, because we're using AI technologies to build the AI technologies faster.
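A toy sketch of the structured-workflow idea: most steps run automatically, and the workflow pauses for human sign-off only at designated checkpoints. The step function and checkpoint numbers are illustrative, not an actual ModelMind workflow:

```python
# A 50-step workflow where AI handles most steps and a human must
# approve at designated checkpoints (here, steps 25 and 50).
HUMAN_CHECKPOINTS = {25, 50}

def ai_step(step: int, state: dict) -> dict:
    """Stand-in for an AI-automated step (classify, draft, validate, ...)."""
    state[f"step_{step}"] = "done"
    return state

def human_approves(step: int) -> bool:
    """Pause the workflow for an explicit human decision."""
    answer = input(f"Checkpoint at step {step}: approve the work so far? [y/N] ")
    return answer.strip().lower() == "y"

def run_workflow(total_steps: int = 50) -> dict:
    state: dict = {}
    for step in range(1, total_steps + 1):
        state = ai_step(step, state)
        if step in HUMAN_CHECKPOINTS and not human_approves(step):
            raise RuntimeError(f"Workflow halted by human at step {step}.")
    return state
```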
Loren, thank you so much for joining us. This has been an awesome conversation. Could you let the listeners at home know, if they're interested in ModelMind and some of the work you're doing, how to find out more information and get in touch with you?

Certainly. You can go to modelmind.ai, our website, and look us up there. If you want to connect with me, I'm on LinkedIn: Loren Sauer, L-O-R-E-N. And I guess we'll probably post my [00:35:00] information at the bottom.

Absolutely.

But feel free to reach out on LinkedIn. I love to talk about this, I love what you guys are doing, and I love that you're asking these questions. I think that's very important right now. So this has been great; thank you.

Awesome. Thank you so much, Loren. Thank you for coming on.