Count Me In®

Join host Adam Larson as he sits down with Michael Housman, the founder and CEO of AI-ccelerator, who shares his intriguing journey from academia to the dynamic realm of fintech. Michael’s passion for data and coding has taken him from healthcare to HR tech, e-commerce, and now the cutting-edge world of financial technology. They chat about the importance of flexibility in career paths, the transformative power of AI in credit decisions and underwriting, and the game-changers like Brex that are disrupting the fintech market.

Michael also dives into the balancing act between human intuition and machine algorithms in fraud detection, offering practical tips for finance and accounting organizations. Whether you’re curious about AI’s impact on jobs or looking for ways to innovate within your own company, this conversation is packed with insights and real-world examples. Tune in for a thought-provoking discussion that’s both approachable and enlightening!

Creators & Guests

Producer
Adam Larson
Producer and co-host of the Count Me In podcast
Guest
Michael Housman
Founder and CEO of AI-ccelerator

What is Count Me In®?

IMA® (Institute of Management Accountants) brings you the latest perspectives and learnings on all things affecting the accounting and finance world, as told by the experts working in the field and the thought leaders shaping the profession. Listen in to gain valuable insight and be included in the future of accounting and finance!

Adam Larson:

Welcome to Count Me In. I'm your host, Adam Larson. And today, we have a special guest with us, Michael Housman, founder and CEO of AI-ccelerator. Michael's journey is a fascinating one, transitioning from academia to the private sector, driven by his passion for data and coding. His career highlights include roles in health care, HR tech, ecommerce, and now Fintech.

Adam Larson:

In this episode, we'll explore how AI is revolutionizing Fintech, particularly in credit scoring and underwriting. We'll discuss how some companies are leveraging AI to serve niche markets, the importance of career flexibility, and how human intuition can collaborate with AI to combat fraud. We'll cover the challenges and best practices in AI adoption and internal AI policies. This episode has valuable insights that you won't wanna miss. Let's get started.

Adam Larson:

Well, Mike, I'm really excited to have you on the Count Me In podcast. And you've got such an interesting background. You started with a PhD, and now you're leading an organization in Fintech. Maybe we can start by talking a little bit about your story and how you got from the PhD to the Fintech space.

Michael Housman:

Yeah. It has been a very weird and winding road. I guess the background is I'm a data nerd, and I love working with data, building models, and using code to do that. My PhD was actually in economics, specifically health economics. And then when I finished my degree, I kinda had to make a decision.

Michael Housman:

Do I go into academia? And I thought, hey, this data stuff seems to be pretty valuable to startups and companies, and I decided I didn't wanna just be publishing articles forever. So I instead made a leap into the private sector, which, you know, amongst academics, is a little looked down on, but I kinda decided, hey, this is right for me.

Michael Housman:

And from there, I kinda call it just following the stepping stones. It started at a company that was measuring culture in hospitals, so it scratched that health care itch, and then over time I shifted to HR technology, leading data science and data engineering teams and developing hiring algorithms for recruiters. From there I shifted to large language models in the context of ecommerce. For me, it was always about what seems like it would be fun and interesting to do next. It started with data and models, and then somewhere along the way I got frustrated, because we would build these models and then hand them off to engineers who would have to rebuild them almost from scratch to deploy them to production.

Michael Housman:

And I kinda wondered, well, why do I need someone else to do that? Like, why can't I just do it myself? It is a hard skill set, but along the way, I learned from some very smart technical minds how to build production-grade software, how to deploy to the cloud, and then how to monitor it. So I shifted again from chief data science officer to chief technology officer. And finally, you know, for me, the application specifically, and I know we're gonna talk a lot about Fintech and InsurTech today, that was almost incidental to the skill set I was applying.

Michael Housman:

Like, I like building models. I like deploying to production. Whether it was HR technology, health care, ecommerce, or Fintech, I always liked learning a new subject matter and a new domain, but the skill set I had learned was always relevant and applicable. And I will say one more thing, which is I'm so grateful for where I've landed, which is I get to, you know, teach about artificial intelligence.

Michael Housman:

I travel the world. I get to help companies with their AI. But there was no part of me 15 years ago that said I'm going to be an AI expert at all. Right? You never think that in grad school. For me, I think the most interesting and most productive careers come from saying, you know what?

Michael Housman:

I'm just gonna do it. This thing seems really interesting to do next, and I'm gonna follow that stepping stone. And I've landed in, I think, a pretty amazing place, and I pinch myself regularly that this is what I get to do. So hopefully that gives you some context for people who are like, what's this academic, this health economics professor, doing teaching AI? It's like, I don't know how that happened either.

Michael Housman:

So

Adam Larson:

But I think that's the beauty of the human experience: there's no cookie-cutter way that everybody learns and grows, and our journeys are all different. So to hear your journey of how you got there, you just kept going to the next thing that was interesting to you. And I think that's a good model for anybody who is just starting their career, or in the middle of their career saying, hey, what I'm doing isn't interesting. Well, then find something that is interesting and follow it, because otherwise I think we can get easily bored, and that's how things get stagnant and how we don't grow in our organizations.

Michael Housman:

Totally. Yeah. I just think, like, when I've talked to friends and colleagues, there are some folks who said, I wanna be a doctor or a lawyer or whatever, and I'm just gonna, you know, move ahead and make that happen. And that works for some people. But for me, I'm always fascinated with those who just kinda figure out what comes next.

Michael Housman:

And in many cases, steps along the way are not figuring out where you should be, but where you shouldn't be. There were jobs along the way. I was a consultant out of college because I just followed the herd, and I hated it. And I was terrible at it. And, like, that's good learning.

Michael Housman:

I mean, it's a painful experience when you don't like the job. But at least you learn, okay, this is what I'm not meant to do. Great. I can check that box off.

Michael Housman:

Right? It's done.

Adam Larson:

Yeah. It's totally done. Okay, I learned that. I learned the lesson I needed to learn.

Adam Larson:

Let's move on to the next thing. Otherwise, you know, it's 20 years later, and you're like, wait. Why am I still here?

Michael Housman:

Totally. Totally. Just being flexible, malleable, pivoting kinda like a startup. You're looking for product-market fit for you as an individual: the skills you bring and, you know, what does the market want from you.

Michael Housman:

Right?

Adam Larson:

Well, and thinking about the market, you know, we all have to understand that AI is here, and we need to understand how to apply it in our everyday lives. And especially in the Fintech space, maybe we can talk a little bit about the inclusion of AI and how it's changing things, because it's changing things very rapidly.

Michael Housman:

Yeah. 100%. I mean, I'd say, first and foremost, Fintech is not dissimilar from HR technology in that you're making decisions about people. Right?

Michael Housman:

Whether someone is creditworthy, whether they should be issued a payday loan, whether they should be given credit, and so on and so forth. And a lot of it comes down to what signals we see about the individual that would indicate that they're credit- or loan-worthy or whatever it is. And what we learned in HR technology, and what we've seen in financial technology as well, is that the heuristics that have been used by banks and, in many cases, credit scoring agencies are off in a lot of different ways. And it's the same thing in hiring. Right?

Michael Housman:

When you interview recruiters, they have these heuristics. Like, don't choose job jumpers, people who jump from job to job. Don't use someone who's been unemployed for long bouts of time. We even dispelled some notions about, you know, don't hire someone who has a criminal background.

Michael Housman:

And so it's just interesting to me that, as humans, we're all biased. There's a long line of research that says we're all biased, and we form these rules because we kinda don't wanna put in the energy involved in thinking, and algorithms cut through that. And so the Fintechs that I love are companies that come in with a creative approach to credit scoring and underwriting, completely upend the market, and eat someone else's lunch. Right? And I'll give you two clear examples.

Michael Housman:

Like, one is the story of Brex, which I tell often because it's fascinating, right, which was a couple of startup founders who applied for a credit card. And I don't remember exactly what it was, but they were denied. And these guys had good credit, and they had an interesting idea for a startup. And what they realized was that credit card companies don't know how to underwrite the risk involved in startups. Right?

Michael Housman:

On the face of it, a startup looks like a terrible risk, because it just swallows up capital and it's cash flow negative for years and years until it's not.

Adam Larson:

Yeah.

Michael Housman:

And so they came in, and they said, we're just gonna build a credit card for startups. And they found, obviously, a huge market, because companies like the Y Combinators of the world could give them access to entrepreneurs everywhere. And by just understanding that market, they were able to come in and underwrite that risk properly, and they became a unicorn almost overnight. And, you know, between us,

Michael Housman:

I've done some strategy sessions and AI teaching for one of the biggest credit card companies in the world post-Brex, and they were looking at each other saying, how did this happen? Like, that should be ours. How did we miss out on that opportunity? And, like, for me, I felt bad, and I wanted to help them, but I also wanted to say, hey, you guys really weren't thinking about this the right way.

Michael Housman:

And some folks came in with some really creative, outside-the-box thinking. So hopefully that's one example. And to your original question, where are the applications in Fintech? First and foremost, it's using the right data to make better decisions about how to think about risk and about giving access to financial products.

Adam Larson:

Now how do we avoid that bias you were mentioning? Because on the human side of things, we have all these different biases. Like, we judge people by the worst day of their life, the worst decision they made. Somebody who has been in prison before, as an example. And we say, oh, you can't do this, you can't do that.

Adam Larson:

And we're not giving them opportunities. And I always ask people when that subject comes up, would you wanna be judged by the worst decision you ever made? Not everybody's had that happen, but plenty of people have made pretty stupid decisions in their life, and maybe they were never caught, maybe nothing ever came of it. So how do we get beyond those biases that we've created as humans? Because, you know, we're humans, and it's, oh, I don't like those people because they wear red shirts and we wear blue shirts. We're gonna sit over here, and the blue shirts and the red shirts are gonna be separated forever.

Adam Larson:

It's silly things like that that we as humans do.

Michael Housman:

Yeah. I mean, the answer is it's really hard. Right? These biases get reinforced, and it becomes very hard to back out of them, and you can't just change your mind.

Michael Housman:

It's a really hard thing to do. So I think there are two answers to that. One is teaching people to use more information. And that's where, you know, a big part of my career has been: it's not human or AI. It's human plus AI.

Michael Housman:

And that means giving information to recruiters sitting on the front lines, or to folks on the front lines of banks issuing loans. You know, we would assess an applicant, render a score of green, yellow, or red, and say, hey, hire, or issue loans to, the greens over the yellows and the reds. But even that is really hard, because it can feel like undermining 20 years of experience. I've sat down with recruiters, and they say, listen, you know, egghead.

Michael Housman:

I've been doing this for 20 years. There's no way your algorithm is smarter than me, with all the PhDs that you have in the world. And it's really hard to convince people to use that data, but you just try to give them the right education. You try to show them some of the data, and slowly, you'll convince people.
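As a rough illustration of the green/yellow/red handoff Michael describes, here is a minimal sketch in Python. The thresholds, field names, and example scores are hypothetical placeholders, not values from any system he built; the point is only that the model proposes a band and a human makes the final call.

```python
def band_score(approval_probability: float) -> str:
    """Map a model's approval probability to a band a human reviewer acts on.

    The 0.7 / 0.4 cutoffs are illustrative; in practice they would be
    calibrated against historical outcomes.
    """
    if approval_probability >= 0.7:
        return "green"   # recommend approving / hiring
    if approval_probability >= 0.4:
        return "yellow"  # borderline: leave to the reviewer's judgment
    return "red"         # recommend declining, pending human override


# Example: the model scores applicants, the human reviewer sees only the band.
applicants = {"A-1001": 0.82, "A-1002": 0.55, "A-1003": 0.21}
for applicant_id, probability in applicants.items():
    print(applicant_id, band_score(probability))
```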

Michael Housman:

And then the other thing I would say is sometimes it's not about giving more information, but about how you present it, and even about withholding information. As an example, Airbnb had a practice for years. I was friends with their head of people analytics, and they had a really rigorous approach to recruiting. One of the things they did, especially for developer jobs, is they just withheld the name on the resume. So you'd be screening a resume, but there were pieces of information you couldn't access. And you'd be amazed.

Michael Housman:

And what they found was that women and minorities ended up being hired with much higher throughput just by virtue of reviewers not being able to see names. And so it's like, look, we're biased in these unfortunate ways. And I love that approach, because it's like, let's just hire on merit. You know, setting aside your feelings on DEI.

Michael Housman:

Like, no one could disagree that if, by hiring purely on resume, on knowledge, skills, and abilities, certain individuals, women and minorities, made it through that funnel more effectively, that's a no-brainer. Right? So, yeah, the punchline is that it's really hard, and behavioral economists like Danny Kahneman would have very strong opinions about this. But little nudges like that, withholding this information, providing that information, research has shown it does have an impact on our behavior.
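A minimal sketch of the blind-screening idea, assuming applications arrive as simple dictionaries. The field names and the redaction list are hypothetical illustrations, not Airbnb's actual process.

```python
# Hypothetical identifying fields stripped before a reviewer sees the resume.
REDACTED_FIELDS = {"name", "email", "photo_url"}


def blind_copy(resume: dict) -> dict:
    """Return a copy of the resume with identifying fields removed,
    so reviewers score only skills and experience."""
    return {key: value for key, value in resume.items() if key not in REDACTED_FIELDS}


resume = {
    "name": "Jane Doe",
    "email": "jane@example.com",
    "years_experience": 6,
    "languages": ["Python", "Go"],
}
print(blind_copy(resume))  # {'years_experience': 6, 'languages': ['Python', 'Go']}
```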

Adam Larson:

Yeah. It does have an impact, and you can't deny it when you're shown the stats. Well, politicians will deny things till they're blue in the face even when they have the stats, but that's a whole other conversation. But when you give true business people the stats and the numbers, they're like, oh, I think I do need to change.

Adam Larson:

And I think that's the biggest thing AI is showing us: it's showing us the data, and we're like, oh, wait, I have been doing this wrong. The next big thing is people admitting that they're wrong and then making the change, and I think that's the hardest thing for humans to do as well.

Michael Housman:

Yeah. I totally agree. I mean, it's a lot of what I spend my time doing. It's one thing when you put Gen AI tools in people's hands and they're convinced, like, oh, this saves me a day of my time, saves me an hour. No one's gonna fight that.

Michael Housman:

Right? You still have to train them. But there's a different piece, which is that part of what AI does is decision making. Right?

Michael Housman:

That's all about machine learning and saying, hey, this tool can make better decisions because it can access more data than you ever could. And that's much harder. That cuts to the core of what we are, because we are intelligent beings, and this brain is what separates us from all other mammals on earth.

Michael Housman:

And so it's really hard to then say, oh, you know, there are these algorithms that are getting smarter than us, and eventually they'll be out there and they'll be smarter and conscious, and that's another conversation. But, yeah, it's hard. I'll say one more thing, which is, between the technology and the people, the technology is easier to build. It's the people.

Michael Housman:

It's changing behaviors. It's teaching. That's always the harder piece.

Adam Larson:

Yeah. It really is. That is the harder piece, because people just don't want to change. I mean, you can see the whole library of literature about change management, and that was before AI was even part of the conversation. And it's not an easy thing to change your way of doing things.

Adam Larson:

And so being able to adjust your thinking and say, hey, this is gonna help me. You can see it using generative AI in an everyday application, even things as simple as, I can load my audio from a podcast and get a whole analysis of everything we talked about, suggested titles. It saves me so much time when I'm doing something as simple as this. But in larger applications, you still need the human, the human plus the artificial, to make those very strategic decisions, because obviously there's a whole wealth of literature out there of people imagining what would happen if the AI took over. And I think we still need that human aspect of being able to make those decisions, because there are things the AI can't take into consideration, the human aspect of it.

Michael Housman:

Yeah. 100%. Absolutely. And, yeah, I've heard, like, AI-generated podcasts, the pure AI ones. So, look, here's the good news, Adam: you're not out of a job anytime soon, because I've heard them with the AI-generated voices, and they're not really engaging.

Michael Housman:

They just don't hit the nerve. But I would argue that, you know, podcasters doing research with AI tools, knowing what questions to ask, maybe being prompted, I haven't seen the state of the art, but I think that is where it's headed. Right? It's harnessing the power of that technology.

Adam Larson:

Yeah. It saves you time. Now when we're talking about Fintech and AI, one of the biggest spaces people are talking about is stuff like fraud detection. Have you had any experience in that? Can we talk a little bit about some examples you've seen in that space?

Michael Housman:

Yeah. I did. I spent a couple of years doing fraud detection for large banks that were issuing loans. And I will say, overall, it's such a cool space just because it's like cops and robbers with data. So you're, like, fighting the good fight.

Michael Housman:

There are fraudsters out there looking to take advantage of not just banks, but, because of what they're doing, banks then add stipulations to applications. Like, when we submit a loan application, banks sniff at it a lot more closely because of fraudsters. And for me, the punchline was that it was human plus machine, which is the same theme, and I'll give you the example. Fraudsters would find a new attack vector, right, some new way of perpetrating fraud. Oh, guess what?

Michael Housman:

They don't check your employer, so we can create a fake employer, and everyone starts using that fake employer. I think Bob's Big Tires was one that we found. Everyone started using that one, and all of a sudden, we saw a big influx of fraudulent applications. The algorithms wouldn't find that sight unseen. You just need human intuition.

Michael Housman:

Like, we'd start looking for fraud rings, and we'd see patterns in the data, and we'd see people trading information. And then all of a sudden, you'd see that employer, and you'd say, hey, let's dig into that one specifically. You'd have analysts who were trained, and they'd find, oh, yeah.

Michael Housman:

Yeah, that's a new thing they've discovered. And they would find it, but then you need to render these decisions within a sub-second response time. So the fraud team would take that information to my team, which was technology, and we'd very quickly have to build rules and deploy them to stop that fraud from occurring.
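To make that analyst-to-engineering loop concrete, here is a minimal sketch of the kind of hand-written rule that might get deployed after analysts spot a pattern. The rule names, the employer string, and the blocklists are hypothetical stand-ins, not the actual rules from that job; "Bob's Big Tires" simply echoes the example Michael mentions.

```python
# Hypothetical blocklists maintained by fraud analysts and pushed to the
# decision service as soon as a new pattern is confirmed.
FAKE_EMPLOYERS = {"bob's big tires"}
BLOCKED_SSNS = {"000-00-0000"}  # placeholder for a known-fraudulent SSN list


def fraud_flags(application: dict) -> list:
    """Cheap, rule-based checks that can run in the sub-second decision path
    before (or alongside) the slower machine-learning score."""
    flags = []
    if application.get("employer", "").strip().lower() in FAKE_EMPLOYERS:
        flags.append("known_fake_employer")
    if application.get("ssn") in BLOCKED_SSNS:
        flags.append("blocklisted_ssn")
    return flags


application = {"employer": "Bob's Big Tires", "ssn": "123-45-6789"}
print(fraud_flags(application))  # ['known_fake_employer']
```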

Michael Housman:

And so it was a fun partnership: folks in the data unearthing interesting trends and then sharing them with the data science and technology teams so they could plug the holes. And frankly, that's how every company doing fraud functions. You have the human analysts who are constantly looking for bad behavior, and then you have the machine element, because that's what has to make a decision very, very quickly on an application. That was, for me, one of the coolest things. The other thing I'll say is I was very naive when I started that job.

Michael Housman:

Because, you know, I'd never done this stuff. Right? I'd never been on the dark web before. We had a chief fraud officer whose job was to understand how criminals think.

Adam Larson:

Wow.

Michael Housman:

And so I remember, early in my time there, he kinda came to me and was like, oh, yeah, there's a blacklist of Social Security numbers that we just know are fraudulent. And I kinda looked at him and said, wait, you can buy a Social Security number online? And he looked at me like, oh, sweet child of mine.

Michael Housman:

Like, you don't understand. You have no idea what folks are trading in those black markets. So, you know, lesson learned. You need to understand the terrain and spend the time. I don't know that I ever went on the dark web myself, but I definitely picked his brain a lot about what people are doing to defraud folks like you and me.

Adam Larson:

Yeah. Well, and there are so many new schemes. And for every blocker you put up, I'm sure the fraudsters come up with three more. So that ability to be agile is more important than ever within your organization.

Michael Housman:

Yeah. 100%. You just needed to move very quickly to catch that. And those were just certain types of loans.

Michael Housman:

With credit cards, they're even more nimble. Right? There are just a zillion different ways in, and it's so hard to do. But, yeah, it was fun. When you're working on projects like that, you're building technology, but you know it's helping not just banks but folks who are applying for loans. I felt really good about the work we did there.

Adam Larson:

So what should finance and accounting organizations be doing to help protect themselves? Because if you're a team and you're receiving payments for services or goods or whatever, what steps can you take to help prevent those frauds from happening? Obviously, you can partner with organizations to help catch those things, but are there certain steps you should be taking within your own organization?

Michael Housman:

Yeah. I mean, I think at a minimum, you need algorithms and technologies designed specifically to be looking for fraud. Companies like Stripe, you know, I have a friend who does fraud detection at Stripe, and they just have armies of data scientists constantly on the lookout for fraudulent behavior. Now that said, it's hard if you're a smaller company and you don't have the resources of a Stripe or a Kabbage or any of the big fintechs. You can start to build the capabilities. Or, the good news is, there are plenty of algorithms and SaaS software tools that will let you buy it, right, and insource it or kind of outsource it.

Michael Housman:

I was talking yesterday with a very, very early Fintech, and they have a novel approach to lending. And it's something they needed to think about. They're on kinda day zero, trying to launch a product, and I said, I know you have the best of intentions, but there are gonna be folks out there looking to steal money from you. So you can't punt on that.

Michael Housman:

You need to at least give some thought to who can help. And like I said, the good news is, with the work we did at Point Predictive, you leverage the power of the network. By that, I mean they have so many clients. If you are Visa or Amex or Mastercard, you're looking at one slice of the pie, and there are companies that work across all of them and can say, listen, we know what the network looks like.

Michael Housman:

So you're not looking at one piece of it. You're looking at all of these transactions. But, yeah, the point is you need to be thinking about this stuff early on if you're in the business of any sort of risk adjusting. Issuing insurance, same deal. Insurance fraud is a thing. Any fintech, any insurtech needs to be thinking about it from day zero.

Adam Larson:

Yeah. So what about things like internal AI policies? What has been your experience with that? And are there some best practices you can share with the audience about having your internal people start to use these tools? Because with generative AI, I can just go to Google right now, get on Gemini and say, hey, analyze this for me. Not that I should be putting our internal company documents into generative AI, but what are some internal policies that you've seen work well?

Michael Housman:

Yeah. I mean, the companies I've spoken to that seem to have a really good handle on this kinda take two different approaches, what you'd call top down and bottom up. The top down is the traditional, listen, we're gonna identify... well, let me back up.

Michael Housman:

The bottom up is, we're gonna empower employees. They're gonna get a certain AI budget they can spend, like a small-ish credit card budget. Encourage them to experiment, recognizing that, hey, we need to have certain rules around what data can be disclosed. You need to be really buttoned up if you're a healthcare or financial application.

Michael Housman:

But allow them, especially your knowledge workers, to look for tools that could enhance their productivity, and say, hey, we wanna empower you. Go find something. If you find it, here's a Slack thread; send it out to everyone and let people know that you found something that's making you 5, 10, 20% more productive.

Michael Housman:

So that's the bottom up. And then for the top down, there are applications and tools you can't just implement on a one-off basis; you can't use a credit card to pay for them. So you have to design pilots. And I think the best companies have almost like an AI steering committee.

Michael Housman:

That committee has a certain budget and certain objectives that have been set out, and they're going from one application to another, designing POCs, testing them out, and then assessing: is this successful or not? The most successful companies I'm seeing are taking both of those approaches. And they're seeing some real, clear wins. Now, I know right now there's a question of, where's all this investment going and where's the benefit?

Michael Housman:

And I do know companies that are benefiting, but it has to be done in a really intentional and strategic way.

Adam Larson:

It really does, because it's very easy to get in there and just have at it. But if you don't understand hallucinations, if you don't understand the limitations of the current systems, you could potentially be putting things out there that aren't true, things that are false, things that don't connect with what you're building. But if you do understand those things, the capabilities are pretty much endless. I mean, it helps us do our jobs in so many different ways that I don't think people fully realize the capabilities that are out there for them.

Michael Housman:

Yeah. 100%. You know, I'm talking with a very large pharma right now, and we're totally revamping the way they do market research, because it's very manual: they use Google, and they use these kind of now-antiquated tools. And we're showing them, listen.

Michael Housman:

When you're doing research, there are some AI-native tools like Perplexity and Gemini. When you're doing writing and synthesis, there are other tools like Claude and ChatGPT. And we're teaching their people how to do it. And the truth is, there's such a boost in their productivity when they're harnessing that power. But they have to understand the trade-offs: what are the things you have to be worried about, what information should you not send to an LLM.

Michael Housman:

So, yeah, there's a big training component. A small part of the project was figuring out their workflow and then figuring out the

Adam Larson:

tools.

Michael Housman:

The bigger part is, let's put it in people's hands, retrain them, make sure they're following it.

Adam Larson:

So, speaking of that example you just gave, what are some red flags people can look for when they're working with their employees on doing something better? We've already talked about the biases you may need to look for. But what other red flags should we be looking for when utilizing different AI systems within organizations?

Michael Housman:

Yeah. I mean, there are a lot of wrapper tools out there. But first off, there's the typical due diligence that comes with SaaS tools, which is make sure you're cognizant of the terms and conditions and what data could be shared with whom. Right?

Michael Housman:

There are a lot of wrappers out there, so I'd get intimately familiar with what they're actually building in the way of novel IP, in the way of LLMs. How much of it is just a wrapper on top of an API that's been published by ChatGPT or Claude or whatever the case may be? And then I would also just experiment with how good the outputs you get from these tools actually are. So just be mindful of that.

Michael Housman:

And then also develop some robust internal policies around data privacy and security. There are horror stories out there of coders who shared proprietary code with ChatGPT, and I think Samsung had a black eye because of this a number of months ago: some code and some data got leaked. So I just think you need to do your proper due diligence.

Michael Housman:

Be really buttoned up about that, but then also make sure that any employees experimenting with these tools understand the rules of engagement. Right? They should be cognizant of what is on and off limits when talking to an LLM.

Michael Housman:

Those are the big ones. And the last piece I would say is, you know, what's your margin for error? This is where I'm spending a lot of time these days: thinking about functions like tax and audit, even lawyering. You have a very small margin for error. LLMs are amazing, but you can't afford for them to make things up.

Michael Housman:

Right? Hallucinations: you can't just make up your audit. You can't just make up your tax return. So you try to figure out, okay, how can we build processes where there's an LLM, supplemented by humans, where you're still faster but you're verifying what's coming out of the LLM?

Michael Housman:

So that's what we're spending a lot of time doing now.
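A minimal sketch of the human-in-the-loop pattern Michael describes for low-margin-for-error work. The `draft_with_llm` function is a hypothetical stand-in for whatever model API a team actually uses; the point is that model output is treated as a draft that a person verifies before it ships.

```python
from dataclasses import dataclass


@dataclass
class Draft:
    text: str
    verified: bool = False


def draft_with_llm(prompt: str) -> Draft:
    """Placeholder for a call to whichever LLM a team uses; its output is
    always a draft, never a final answer."""
    return Draft(text=f"[model draft for: {prompt}]")


def human_review(draft: Draft, approved: bool) -> Draft:
    """A person checks the draft against source documents before it moves on."""
    draft.verified = approved
    return draft


draft = draft_with_llm("Summarize the supporting evidence for this audit workpaper.")
final = human_review(draft, approved=True)  # only verified drafts move forward
assert final.verified
```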

Adam Larson:

Yeah. I think that's a huge market right now: spending the time verifying what's coming out, making sure you're doing it right. And I think that's where a lot of the wrappers come in, where, hey, we're going to take this and we're going to make sure it makes sense for you. But again, you have to understand where it's coming from. Like, hey.

Adam Larson:

I'm gonna use this system, and then realizing, wait, it's just ChatGPT. So I could just pay ChatGPT and do it myself.

Michael Housman:

Yeah.

Adam Larson:

But then what other benefits are coming from that vendor if they're utilizing it?

Michael Housman:

Yeah. 100%. There are these point applications that are useful, but I think folks need to peel the onion a little bit and understand what additional value they're creating. Are they fine-tuning this model, or are they kinda just spitting out a generic API response?

Adam Larson:

Yeah. Definitely. So, as we kinda wrap up the conversation, what are we looking at as we look toward the future? With all the organizations you deal with and what you're seeing day to day, what's coming up around the corner that we should be aware of and mindful of?

Michael Housman:

Yeah. I mean, I think a couple of things. Number one, we're now seeing the AI backlash. It was inevitable.

Michael Housman:

You're starting to see reports. We poured $600 billion of venture capital funding toward AI because folks tend to jump on bandwagons. It was crypto and Web3 three to five years ago, and then everyone's like, oh my god, AI. So, you know, it was inevitable that folks were gonna start to say, hey, wait a second.

Michael Housman:

What is this buying us? And that's fine; a little bit of skepticism now is fine. I see a ton of enterprise value being built, and it's by companies that are thoughtful and deliberate and saying, listen, we have this complex process. We're gonna pull it apart.

Michael Housman:

We're gonna find the low-hanging fruit in terms of adding in ML and AI to automate and augment our people. And I'm already seeing companies improve their processes and sprint ahead of their competition by doing that. So in the short term, you're gonna see a ton of value created, but it's not by the toys and the magic tricks you see online. It's by companies that are really thoughtful and are saying, hey, we're gonna go on this multiyear journey together.

Michael Housman:

And I think where that leads is, I tell everyone these technologies scale intelligence. That means knowledge workers, frankly, like you and me, Adam, are kinda at risk. I don't think it takes our jobs anytime soon, but it is going to change our jobs. And I think companies with a lot of knowledge workers need to be thinking about this now and starting to head down this AI journey. I spend a lot of my time building AI road maps for companies that employ a lot of knowledge workers, because they see the writing on the wall. And, by the way, my firm, AI-ccelerator, we rent out developers and data scientists and engineers. Like, that's our model.

Michael Housman:

If your job is to rent out talent, you know, you gotta be able to adapt. And so, hopefully, the winning companies are the ones taking that seriously, seeing this as a threat and trying to think about how to disrupt their own businesses before they get disrupted. So that's where I see it headed. And then, you know, it's probably a separate podcast over drinks where we talk about whether they'll become self-aware, you know, conscious.

Michael Housman:

Whether they'll take all our jobs and our lives. But, yeah, it's an exciting time, because the companies I'm working with are really digging in and saying, okay, this is a huge opportunity. I'm starting to see them really realize that and sprint ahead of the competition, because the competition is taking a wait-and-see approach.

Adam Larson:

Yeah. Well, I appreciate that advice, and I hope everybody takes it to heart. Be agile. Be ready for what's coming, because you don't know what's coming, and things are gonna be changing very rapidly. Don't get caught back on your heels, basically.

Michael Housman:

Mhmm. Totally agree. Love it.

Adam Larson:

Yeah. Well, Mike, thank you so much for coming on the podcast. I really appreciate you sharing everything with the audience, and your insights are just invaluable. We really appreciate it.

Michael Housman:

Awesome. Thanks so much, Adam.

Announcer:

This has been Count Me In, IMA's podcast providing you with the latest perspectives of thought leaders from the accounting and finance profession. If you like what you heard and you'd like to be counted in for more relevant accounting and finance education, visit IMA's website at www.imanet.org.