Expedient: The Podcast

In this episode of "Expedient: The Podcast," we embark on a timely exploration into the strategic integration of AI into enterprise planning for the year 2024. Joined by Brad Reynolds, the SVP of AI at Expedient, and CEO Bryan Smith, we delve into a comprehensive roadmap designed to navigate the complexities of AI strategy development. This discussion focuses on five crucial areas every CEO and board member must consider to harness AI's transformative power effectively. From organizational readiness and policy formation to operational integration, our experts unravel the essentials of preparing for an AI-driven future. Furthermore, the episode features a dedicated Q&A session, inviting live queries from listeners seeking to refine their 2024 AI strategies. This conversation is not just about Expedient's journey but a universal guide for any business leader looking to chart a course through the evolving landscape of artificial intelligence. Join us for this insightful episode, offering practical advice and strategic direction to empower your enterprise's AI ambitions in 2024.

Creators & Guests

Guest
Brad Reynolds
Senior Vice President of AI at Expedient
Guest
Bryan Smith
Chief Executive Officer at Expedient

What is Expedient: The Podcast?

"Expedient: The Podcast" is your gateway to the inner workings of technology and innovation, presented with unparalleled clarity and expertise. Each episode is an invitation to join the luminaries of Expedient along with special guests from the forefront of the tech industry. We delve into the latest advancements in cloud computing, the evolution of data centers, cybersecurity trends, and groundbreaking developments in AI and machine learning. This podcast strips away the complexity of the technology landscape, offering listeners an exclusive look at the real stories of challenge and triumph, innovation and leadership, that are driving our digital future.

But we don't just stop at presenting groundbreaking ideas; "Expedient: The Podcast" is about building a community. It's for the IT professionals charting their course through the ever-changing cloud environment, and for the tech aficionados keen on decoding the future of digital infrastructure. Our episodes provide the essential insights and perspectives to keep you at the forefront of a world in constant transformation.

Tune in to "Expedient: The Podcast" for a deep dive into the technologies and ideas propelling us towards tomorrow. Experience the journey through the eyes and voices of those shaping our technological landscape, all presented with the authenticity, insight, and forward-thinking Expedient is celebrated for. This is not just a podcast; it's your insider's look into the technologies transforming our lives.

00;00;00;00 - 00;00;40;05
Brad Reynolds
This is Brad Reynolds, the SVP of AI at Expedient. I had the opportunity last week to sit down with Bryan Smith, our CEO, and discuss our plan and how we would help other CEOs and board members plan their 2024 AI strategy. We're going to be going over five key areas you need to address in crafting that strategy, and we're going to be giving you a Q&A session afterwards of 30 minutes so that we can discuss live questions that you may have about the webinar.

00;00;40;08 - 00;01;12;08
Brad Reynolds
I want to welcome everybody to Expedient's webinar on building your 2024 enterprise AI strategy. I'm Brad Reynolds, the senior vice president of Artificial Intelligence at Expedient, and my copilot in this is Bryan Smith, the CEO of Expedient. The overview of this, what you're going to get out of it, is the questions that you need to be asking yourself and your board in terms of crafting an AI strategy.

00;01;12;08 - 00;01;39;20
Brad Reynolds
This isn't going to be an Expedient-centric conversation. It's going to be what you can take away to kind of craft your business for the coming year. Our strategy discussion is going to go over five core points, and these are points that I found in consulting for the last two years before I came on board with Expedient. These are the steps that everybody needs to go through in terms of answering questions so that you can get to the point of implementing AI in 2024.

00;01;39;23 - 00;01;56;07
Brad Reynolds
So with that, I want to ask Bryan some questions. Wearing your CEO hat at Expedient, what are some of the key organizational concerns you were thinking about when we were crafting an AI strategy for 2024?

00;01;56;09 - 00;02;33;02
Bryan Smith
I think a large portion of it is just: where does it fit? And I think there's a big concern also for people around the fear of missing out. They're not exactly sure what it means for them to start, but they know that there's a big risk in not figuring out how AI fits into the business. And most of the conversations around AI are about the impact that it can have for the business: areas of efficiency, giving insights into data and information that were never really available before so the business can make better decisions, and also giving insights into opportunities that may have been missed.

00;02;33;05 - 00;02;47;11
Bryan Smith
And so those are some of the core pieces, in addition to how you can increase the day-to-day efficiency of the people inside your organization. Is that similar to what you see when you're working with outside companies, and how do those trends align?

00;02;47;13 - 00;03;10;11
Brad Reynolds
That's definitely one of the kind of core concerns that folks need to address. What I found is that most folks understand the power of AI, but they need to set the stage a little bit for how they're going to capitalize on that. And so some of the stage setting that needs to be done is: what am I going to need from my board?

00;03;10;17 - 00;03;30;12
Brad Reynolds
Do I need skill sets on my board? Do I need outside consultants that could help me in terms of the governance around AI? And then, at probably an even more important level: great, we think that there's a strategy here, but who's going to go implement that internally? As a CEO, are we going to give it to the CIO?

00;03;30;14 - 00;03;52;06
Brad Reynolds
But who's concentrating on AI inside of our organization, and how much are they concentrating on it? Or is it just an add-on to work that they're already doing? Figuring out that kind of component of who's handling it, do I have board alignment, do I have board support, those are kind of really critical aspects that usually indicate success if you're able to align them.

00;03;52;08 - 00;04;06;23
Brad Reynolds
But one of the other ones that I was interested to get your perspective on is AI policy. Like, how important are kind of the rules of engagement and guardrails from a formal perspective, for what you did at Expedient?

00;04;06;25 - 00;04;29;01
Bryan Smith
Sure. The policy ties into one of the major concerns that the board had, which is security. Their two biggest questions were: how do we do something where we know we're not really leaking our data and giving external access to our private information, but also, where's the right place to start? Those were some of the big concerns from the board perspective.

00;04;29;01 - 00;04;57;11
Bryan Smith
So when we think about the strategy as a whole, a big portion of it is that it has to align with where you're going with the business. So you have to think: are you a business that wants to be very pro leveraging AI inside the business? Then you're going to make adjustments in crafting messaging and/or policy for employees that align with that. Versus, you know, it could be the opposite, very draconian, where there's no access, period; you're going to block all external access.

00;04;57;14 - 00;05;25;13
Bryan Smith
So that's just the base, the first thing you have to decide internally. But when we thought about a healthy AI policy, it really ties into other data privacy and data protection pieces: do you have a good audit trail? Do you know what is being consumed? Is it going to be different for something that we do internally on a private AI deployment versus something that's being used from a publicly accessible tool?

00;05;25;16 - 00;05;49;19
Bryan Smith
And we found that there are even nuances between data that's been passed through kind of a user interface of a public tool versus an API, because the data privacy pieces were actually very different, surprisingly different, just based on how you access the tools along the way. So it's having those different components and making sure that it aligns with the overall compliance structure that you have for the business.

00;05;49;21 - 00;06;09;07
Bryan Smith
But one of the biggest things was making sure that all of the employees understand that they're ultimately responsible, as a human, for anything that they consume and use out of AI, so that it can't be treated as absolute fact just because it's coming from the tool. They still own all that accountability and all that responsibility to make good use of it.

00;06;09;09 - 00;06;33;15
Brad Reynolds
Yeah, I think that's key. There's a lot more nuance than just saying there's a generative AI policy for our company. And actually, as kind of one of the outputs of this talk, we're going to be giving everybody in the success packet an example of a generative AI policy that you could potentially repurpose for your own, that covers a lot of those points.

00;06;33;18 - 00;07;04;20
Brad Reynolds
Additionally, some examples of other folks' generative AI policies, like the City of Boston's, just so you have ideas about what people are thinking about. So you have a policy in place, you have some board understanding and alignment, you have a person that you know could potentially spearhead some of this AI work for you in 2024. The first step is what we'll call shadow IT remediation, or rather shadow AI remediation.

00;07;04;26 - 00;07;22;16
Brad Reynolds
But the concept is that internal employees at Expedient were using ChatGPT right now, even before the policy. So how do you kind of get a guardrail or a control around that so that you can kind of launch into the next year?

00;07;22;18 - 00;07;48;12
Bryan Smith
Yeah, I think those two are really intertwined, because you have the policy that is going to set those guardrails and the framework on what's acceptable versus not acceptable inside the business, what type of information you could put in. But policy is one piece, and it doesn't really have enforcement unless you're tracking everything. And with most of the uses of public AI tools, you wouldn't have any observability and tracking to see what's been entered to begin with.

00;07;48;15 - 00;08;14;14
Bryan Smith
So we've been using the term shadow AI because it's very similar to the connotation of shadow IT, where shadow IT is when other departments circumvent central IT and go leverage and do their own thing, build their own application, get their own infrastructure. Shadow AI is very similar because it's such a consumerized product with easy access: you can get it on your phone, you can get it in a browser.

00;08;14;16 - 00;08;40;04
Bryan Smith
And so the policy can be that first area, but as part of the policy enforcement, we also took additional steps, implementing things that would block access to certain sites. But blocking isn't enough, because people are just going to find other ways to go around and get access, because there's so much value there. So you really have to give them a really positive alternative option that's more powerful than what they get from a single tool.

00;08;40;04 - 00;09;01;01
Bryan Smith
So we leverage different technologies so that we can encrypt and protect our own data before it goes out to an external model, but also give them options for internal models that may actually be more specific to their different use cases and the work that they're doing inside the business day to day.
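Bryan doesn't name the specific tooling Expedient uses here, but the idea of scrubbing sensitive values out of a prompt before it leaves for an external model can be sketched roughly like this. The patterns, function names, and placeholder scheme below are illustrative assumptions, not Expedient's actual implementation:

```python
import re

# Illustrative sketch: replace sensitive spans (emails, IPs) with
# placeholders before the prompt leaves the building, and keep the
# mapping locally so a model response can be re-hydrated internally.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "IP": re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"),
}

def redact(prompt: str):
    """Return (redacted prompt, placeholder -> original value mapping)."""
    mapping = {}
    counter = 0

    def make_repl(kind):
        def repl(match):
            nonlocal counter
            placeholder = f"<{kind}_{counter}>"
            mapping[placeholder] = match.group(0)
            counter += 1
            return placeholder
        return repl

    for kind, pattern in PATTERNS.items():
        prompt = pattern.sub(make_repl(kind), prompt)
    return prompt, mapping

def restore(text: str, mapping: dict) -> str:
    """Re-insert the original values into a model's response."""
    for placeholder, original in mapping.items():
        text = text.replace(placeholder, original)
    return text

safe, mapping = redact("Ticket from jane@example.com about host 10.0.0.5")
# `safe` can now go to the external model; `mapping` never leaves.
```

Production approaches would add many more detectors and audit logging, but the shape is the same: the private values stay inside your environment.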

00;09;01;04 - 00;09;32;04
Brad Reynolds
Yeah, I've been thinking a lot about the shadow AI stuff, and one of the important points to kind of grasp from an organizational perspective is that there are lots of AI options at the buffet. ChatGPT and OpenAI is one, but there are others. There are internal options, there are coding options, there are graphical options. And the notion of "you attract more with honey" is kind of, I think, the strategy that you need.

00;09;32;04 - 00;09;59;28
Brad Reynolds
So the notion that you're going to want to use OpenAI for some things, great. How do you protect your company so that you're technically in line with the legal policy that you put in place? I think it's a huge area, and we've developed our own tools to be able to do that. But it's definitely something that everybody's got to put as the first step of what they do after they kind of define the high-level stuff.

00;10;00;00 - 00;10;19;06
Brad Reynolds
So you have shadow AI remediation implemented, at least a little bit. What are kind of the next steps? Like, what's the next thing that you're thinking about once you have the basic blocking and tackling done to give people an alternative to ChatGPT?

00;10;19;09 - 00;10;38;17
Bryan Smith
So the next thing is really the enablement inside the business, helping people understand. And it's amazing: you can actually use some of the tools to help you with the enablement, so that you can do a session where you provide access to the tooling for your users and then have them interact and ask questions about their role, their job, how they could benefit.

00;10;38;19 - 00;11;03;09
Bryan Smith
And it starts sparking ideas. So pretty quickly you want to get the adoption so you're getting value out of it. That initial type of chat interface is beneficial, but it's not giving you access to your private data and incorporating your pieces in there. So I think the next real thing is that, as a company, you have to agree on what those strategic projects are that you want to go after, and what the things are that can have the biggest benefit.

00;11;03;15 - 00;11;28;11
Bryan Smith
And you have to balance the biggest benefit versus effort and determine what's the right ratio for where you should start. But pretty quickly, there are some other things that pop up. I'm curious, as you've done consulting, what are some of those key projects that you've seen, or the process people go through to vet what's a good place to start, a proof of concept or a project to bring to the surface?

00;11;28;13 - 00;11;54;01
Brad Reynolds
So that's a really interesting question, because I do want to touch on the nuance of how we've done some things at Expedient, which differs from what I've done on the consulting side. On the consulting side, I've mostly found folks who have an idea of a use case: something to improve customer support quality, or to improve a direct customer experience by accessing a database on the back end.

00;11;54;03 - 00;12;20;28
Brad Reynolds
But those were novel use cases. They didn't exist in the business before; we're going to use AI to maybe be transformative. And so that's what I've been doing, and it accesses private data, it uses private models, all that kind of stuff. But I think folks are missing something if they go immediately to a use case that is intended to be transformative, because you need to get the whole organization bought into AI.

00;12;20;28 - 00;12;49;15
Brad Reynolds
So I was interested in what you said in terms of how we can use AI to touch on objectives and key results that were already in flight. And so from a CEO or a board perspective, think about showing value as quickly as possible. I'm not saying that you shouldn't do proof of concepts that could be transformative; definitely keep those in mind.

00;12;49;17 - 00;13;16;17
Brad Reynolds
But where your organization is going to unite around AI and what it offers to a business is seeing things that they've been working on for a quarter or a year suddenly get unlocked in terms of value. So I felt that to be probably the primary path you should think of for a use case, and then secondarily, maybe the next-generation or transformative aspects of how you could use language models to do things.

00;13;16;17 - 00;13;39;13
Bryan Smith
Yeah, that aligns really well with what our approach was, because one of the concerns with AI, especially right now, is that it's such an over-discussed topic, and it means so many different things to folks as well. So there's a risk of it becoming kind of the next shiny penny versus actually having real impact. So when we looked at our use cases, we started actually with a bunch of what-if questions.

00;13;39;15 - 00;14;02;26
Bryan Smith
So we took the three biggest initiatives that we had going on inside the business and said: what if this was capable? What would the impact be if we could do this? And then we started asking: well, what's the impediment? Why can't that be done? And what are the challenges? And one of the common themes was that certain pieces of information were accessible and ready to be used.

00;14;02;28 - 00;14;32;01
Bryan Smith
And then there were other things where the data really wasn't accessible and there was a lot of effort. So then we had to think about some of those different components to help prioritize one project over another, so that we could have early quick wins, but for specific things that tied to the core existing objectives, because then the entire business could see that we were staying aligned with what the core goals were, but also the impact that AI could have in those specific areas.

00;14;32;03 - 00;14;51;01
Brad Reynolds
Yeah, so I wanted to touch on a couple of things that I heard there. One is: to implement these strategies, what structure have you put in place to go and execute on this inside of the business? You talked about things that align to a bunch of different OKRs that we have, but does that mean there are 50 people involved in kind of making those things happen?

00;14;51;01 - 00;14;57;01
Brad Reynolds
How did you figure out how to make that work within the structure of the existing entity?

00;14;57;03 - 00;15;26;26
Bryan Smith
Well, definitely using OKRs is a big part of it, so that there's clear alignment on what the goals are and when we're expecting to have different chunks of work done. But then we really set up almost like a startup inside the business. Instead of having people that were dedicated to that team, we took individuals from different teams, and we're hiring additional people into the business that will report into those existing teams, so we don't build something that's not sustainable and doesn't align with the rest of the way that we operate the business.

00;15;26;29 - 00;15;35;12
Bryan Smith
Plus, by bringing that expertise locally into the different teams, it helps really propagate the knowledge and share that to accelerate the overall learning.

00;15;35;14 - 00;16;04;01
Brad Reynolds
Yeah, I definitely think that interweaving the goals of AI with the organization responsibly takes a specific strategic plan, but that's where you get leverage and acceleration of the strategy. The other thing that I heard you say about how we're looking to implement AI internally was about the data. Something about, hey, the data could be in the right place.

00;16;04;01 - 00;16;15;10
Brad Reynolds
And some of this data is not necessarily in the right place. And how is it labeled and how is it presented? How does the data story integrate into the AI story?

00;16;15;12 - 00;16;42;24
Bryan Smith
Yeah, I'd almost say that they're the same thing: the value of data is the value of AI, in how you consume it and how you use it. But there were some really monumental learnings early on, some aha moments that were surprising. And some of those were that the way we think about data almost gets flipped and put on its head, because historically you've thought of the most recent data as the most valuable.

00;16;42;24 - 00;17;10;04
Bryan Smith
So when you open an office program, it shows you: here are your recent files that you're working on. Or when you're doing backups, you keep a certain volume in hot storage, and it gets colder the older it is. And so historically, people, and Expedient also, have stored data at different tiers to balance cost against retention and reporting requirements, but also have used different protocols in different places.

00;17;10;07 - 00;17;30;12
Bryan Smith
So you may have one thing that's for object storage and a different thing that's for a Windows server. And we found very quickly that much of that data wasn't accessible for what we were trying to do from an AI perspective, so we needed to look at the overall structure of it. But then also: what's the appropriate data that we want to make accessible?

00;17;30;12 - 00;18;00;08
Bryan Smith
How do you put data governance around it so that only the right people have access to the right pieces of information? So there's a lot of time and effort involved in getting all of your data ready. And that's a large portion of where you should focus early on: understanding where data is and what you want to use, how you would classify it, and then putting in automated processes to keep good health, working on the projects whose data you can access right now, and then moving to others as you make progress.
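The classification-and-access point above can be sketched minimally: every document carries a classification and an allowed-groups list, and retrieval filters on the caller's groups before anything reaches a model. The field names and labels here are illustrative assumptions, not Expedient's actual schema:

```python
from dataclasses import dataclass

# Minimal sketch of governed retrieval: a document only reaches the
# model if it is public or shares a group with the requesting user.
@dataclass
class Document:
    doc_id: str
    classification: str              # e.g. "public", "internal", "restricted"
    allowed_groups: frozenset = frozenset()
    text: str = ""

def visible_documents(docs, user_groups):
    """Return only the documents this user's groups may feed to a model."""
    groups = set(user_groups)
    return [
        d for d in docs
        if d.classification == "public" or (d.allowed_groups & groups)
    ]

corpus = [
    Document("kb-1", "public", frozenset(), "How to reset a password"),
    Document("fin-9", "restricted", frozenset({"finance"}), "Quarterly revenue detail"),
]

# An engineering user sees only the public doc; a finance user sees both.
```

In a real deployment this filter would sit between the vector store and the model, so access control is enforced before retrieval results ever enter a prompt.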

00;18;00;10 - 00;18;39;06
Brad Reynolds
Yeah, I see the conversation being a business conversation: we kind of understand what AI can do, and we want it to solve these business problems. Now, if we define that as the use case at the CEO level, board level, CIO level, how do we go accomplish that? Step one, often, once you've defined your use cases, is: okay, where is the internal data that we're going to need to consume, and how do we move it, store it, and present it in a way that the AI logic and reasoning engine can access to drive some insights?

00;18;39;08 - 00;19;12;17
Brad Reynolds
But that brings up a nuance in terms of, like, one of our internal use cases, which is that we have trouble-ticketing systems that involve our own internal knowledge. They involve client knowledge; they can involve proprietary information related to clients. We have regulations and audits that we're subject to. How do you wrap your mind around all of the requirements with data so that we can use AI, but in a responsible fashion, given the things that we've committed to our clients and our employees?

00;19;12;19 - 00;19;36;24
Bryan Smith
Yeah. One of the things that helped give comfort to both myself and the board when we were looking at that is how we can put those guardrails around those pieces of information. So we got pretty comfortable if we said that that data doesn't ever exit our current environments, and the workloads and the pieces sit directly adjacent to it, versus streaming it out to an external public tool.

00;19;36;26 - 00;19;59;12
Bryan Smith
And so the way that we're really accomplishing that is: there are certain things that we can do, maybe a chat for common questions, that would go and access a public tool. But to have access to those groups of data, it's limited to a private model that we're running on infrastructure inside our data center that's directly adjacent and connected to those sources of data.

00;19;59;12 - 00;20;04;21
Bryan Smith
So that we have a lot more control over those capabilities.

00;20;04;23 - 00;20;25;00
Brad Reynolds
Can you touch a little bit more on the private AI aspect? I've heard private AI come up a lot; I've also heard public, in terms of the kinds of clouds that you can access. In the context of that use case, or in general, tell me about kind of what the private AI infrastructure is and how accessible it is to a general company.

00;20;25;02 - 00;20;28;04
Brad Reynolds
Like any of the folks watching the webinar.

00;20;28;06 - 00;20;48;17
Bryan Smith
We may actually want to zoom out even further, because I think a large portion of the conversations, probably 80% of what's going on right now, people are so focused on training their own model, and so they think it can't really be done internally, because they can't get enough access to GPUs to build their own private AI.

00;20;48;17 - 00;21;11;26
Bryan Smith
So I think you need to think about that first. And one of the best analogies here, I would say, is: when you think of the training process in the whole journey, that's kind of like the time in your life that you spent learning to walk. Most people, in a period of two to three months from the first time they start standing up, are fully walking, but then they use that skill for the rest of their life.

00;21;11;28 - 00;21;32;18
Bryan Smith
And the training piece is likely going to be done in a public area, like a public cloud, or more likely even a specialized cloud that's built for that purpose. But most companies can't even do their own training. If you're doing your own training, that's like going to an auto parts store and purchasing a car.

00;21;32;19 - 00;21;52;18
Bryan Smith
You don't do that, or it's pretty rare. And so think about it this way: you're likely going to be picking multiple models that fit your business, and then you can run those on your own private hardware. So it's a matter of actually having not one model that you're going to run; in your business, you're likely going to run multiple different models for different use cases, for different departments.

00;21;52;20 - 00;22;04;03
Bryan Smith
And by having that specialization of the models, the more specialized they are, the smaller they get. So that's part of the way that it can be cost-effective to run them in a private environment.
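The "multiple specialized models" idea reduces, at its simplest, to a routing table: each use case maps to the smaller, purpose-fit private model that serves it. The model names and routes below are hypothetical examples, not a list of what Expedient actually runs:

```python
# Hypothetical routing table: each use case is served by a smaller,
# specialized private model; anything unrecognized falls back to a
# general-purpose model.
ROUTES = {
    "code": "local/code-7b",
    "support-tickets": "local/support-7b",
    "general-chat": "local/general-13b",
}
FALLBACK = "local/general-13b"

def pick_model(use_case: str) -> str:
    """Map a use case to the private model that should serve it."""
    return ROUTES.get(use_case, FALLBACK)
```

The routing layer is also a natural place to enforce which use cases are allowed to leave for a public endpoint at all, which ties back to the policy discussion earlier.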

00;22;04;06 - 00;22;28;18
Brad Reynolds
Yeah, I think, like with all this stuff, there are updates that happen on a daily, weekly basis, and then there are things that are a monumental change. About four months ago, Meta came out with an open-source model. Just think about that: a major piece of software that you can download and run on your own servers, or on a cloud set of servers, but it's dedicated to you.

00;22;28;20 - 00;23;05;21
Brad Reynolds
And that didn't exist before, not at any substantial size. So now think about it as an engineering concern: for those applications where I need to keep the boundaries very tight and underneath my control, first of all for privacy, but also for what happens when an outside model gets updated and some business process flow that you were depending on suddenly changes. You can do version control, so you can put the model under your control, log it, and put all the normal kind of monitoring and controls around it.

00;23;05;24 - 00;23;30;25
Brad Reynolds
But that's a novelty that didn't exist too long ago. That then gives you the private basis from which you can extend: you can take models like that, or other models, and customize them to your business. One example that really resonated with me at Expedient is that we have a relatively large source code library of all the different modules and applications that we've built.

00;23;30;27 - 00;24;02;11
Brad Reynolds
We want to take that foundational coding model and let it learn the way that we speak about code, because every company has its own language when it comes to marketing or code. We'd never upload that to any sort of public model, for many reasons, but by being able to host it ourselves, we can actually modify that model to speak more like a really highly trained employee in our office, for all intents and purposes.

00;24;02;11 - 00;24;37;03
Brad Reynolds
So I think that there's a revolution coming. And you had mentioned training: the year of your life you spend learning to walk, that's training, and then the 80 years of walking, that's what the industry calls inference, right? But it just means running the model in your operation. And I think that there's a revolution coming in inference, where you can use models that speak code, that speak general purpose, that speak even things like accounting and all the different kinds of nuances in language.

00;24;37;03 - 00;24;59;12
Brad Reynolds
And that's coming down the pike. So if you're a CEO or a board member or a CIO, you now have the control to say: hey, these things, let's do them internally; it meets all of our rules and restrictions. Or, for certain businesses that are highly regulated: let's do everything internally; we can't accept even an iota of a chance of data getting out.

00;24;59;15 - 00;25;24;21
Brad Reynolds
And for other businesses: let's do some of it internally, but for some of it we're okay using some of these best-of-breed image generators for marketing or for other internal operations. It's kind of like the CIO now has control over the levers, and the CEO has control over the levers, but you have to have that strategic plan in place so that you realize where the levers are, which ones you're going to pull, and in what situations.

00;25;24;21 - 00;25;32;08
Brad Reynolds
So yeah, that's, I think, the kind of core in terms of what you can do privately versus publicly.

00;25;32;13 - 00;25;54;06
Bryan Smith
Yeah, I think of it very similarly to cloud. Early in cloud, people had this belief that everything was going to go into the hyperscale cloud, and, you know, 15 or 20 years into that, they realized that that's not really where things ended up. It's more like 30% there, and the rest in other private types of cloud, either on premises or through alternate service providers.

00;25;54;09 - 00;26;12;29
Bryan Smith
And the AI piece is likely very similar. There are going to be certain things where it makes total sense to run in a public model. And then there are other things that you want inside your business, where you're going to have those additional security layers, where you want to be able to have your own change control when you move from model version one to model version two.

00;26;13;02 - 00;26;33;13
Bryan Smith
And those are things that may be more unique to your business. It could be because it's accessing your specific data that's sitting next to it, or the speed that you need for those connections. All of those are different pieces that also build confidence in your team that you're going to have a good experience, because they have more control over some of those components.

00;26;33;13 - 00;26;48;07
Bryan Smith
So for the same reasons that people have made decisions to put different applications in different clouds, I think that same mentality works really well for why you would put certain things in different types of models, or different types of AI use cases.

00;26;48;09 - 00;27;13;22
Brad Reynolds
So we've developed this strategy internally at Expedient. As you're looking out at 2024 and thinking about how we set the stage, how we provided an alternative to shadow AI, how we got our data ready, and how we're thinking about the whole public-versus-private question, show us kind of the map for 2024 for Expedient and how you're looking to drive value out of that.

00;27;13;25 - 00;27;41;08
Bryan Smith
Yeah, a lot of Q4 this year is getting the additional foundational pieces in place and expanding the knowledge base inside the organization as a whole. And so there are certain areas, like the protected chat eliminating that shadow AI component, that we have available to the employees. So that helps accelerate and also build confidence in using AI, so they can see how it improves their efficiency.

00;27;41;11 - 00;28;11;03
Bryan Smith
And then, as we're getting our data ready and AI-ready in different areas, the first areas that we have will be injecting our own information and data into those private models that stay inside of Expedient, and those use cases start to get deployed at the end of this year. And then, as the other components and data become accessible, sometimes we have to write new APIs; other times it's moving storage from older platforms onto our current platforms.

00;28;11;06 - 00;28;28;20
Bryan Smith
Each of those stages unlocks things, but each one is incremental value that we're giving to the business along the way. It's not like a waterfall project where you have to wait until the end and get everything all at once. At each one of these releases, different teams and different groups are seeing a big difference in the work that they're doing.

00;28;28;22 - 00;28;53;21
Brad Reynolds
It's going to be an exciting ride in 2024 for us, but I think the part that I'm most excited about, being in the AI bubble, or the entrepreneurial startup inside of Expedient, is that I can see how the things that we're working on tie into the rest of the business. We do it through OKRs; other people have different methods to do that, but you don't want it to happen in a bubble.

00;28;53;21 - 00;29;28;02
Brad Reynolds
You want folks to know that when they're doing these things, they're making a difference in terms of the overall operation, and then that makes you feel good. It's a great feedback loop. So thanks for giving us all of your insight in terms of the AI plan and strategy for 2024. And in terms of the folks on the call: these are kind of the five concrete steps that you have to go through to be able to say, okay, I'm ready to go and implement a proof of concept that involves some sort of AI inference internally.

00;29;28;05 - 00;29;50;07
Brad Reynolds
You need to understand setting the stage. You need to understand where your data is. And then at that point, with your use cases, you can figure out: do I want to do things publicly or privately? But the kind of summary words that I'll leave you with are: we can't be wringing our hands about AI. You need to be responsible about it.

00;29;50;07 - 00;30;10;01
Brad Reynolds
You need to put a policy in place. But in the words of Nike: just do it. You have to start doing things with AI, getting employees using it. And it's a little scary; there's just no two ways about it. There's all kinds of knowledge that none of us have a complete grasp of, and we're going to be learning about it.

00;30;10;01 - 00;30;47;14
Brad Reynolds
But just doing it, taking a few steps forward and then taking a few steps after that, is the core to being able to realize the value out of your 2024 strategy. And as a summary, we'll be sending out a success packet that covers a lot of detail in each of these areas. So, for example: some policy examples, links to McKinsey reports on what you should be thinking about in generative AI, links to reports from the likes of Harvard Business Review and KPMG about how the CEO and the board should be interacting as it relates to AI, and newsletters that you can subscribe to.

00;30;47;14 - 00;31;07;08
Brad Reynolds
But the reality is it's a lot of information that we've curated, to say: these are some things that, if you read them, you'll probably be able to take a couple of valuable takeaways for use in your own 2024 strategy and implementation. So thanks, Bryan. It's always a pleasure talking, and I look forward to doing a great job in 2024.

00;31;07;10 - 00;31;08;13
Bryan Smith
I'm looking forward to the delivery of it.