This is the show where we go deeper than the hype. Where we go beyond just the prompt. On the podcast, we talk with product, engineering, and GTM leaders who are building AI-native products and using AI to supercharge how their teams operate.
If you’re looking to scale your business with AI or want to learn from those doing it at the frontier, then you’re in the right place.
Sani Djaya (00:53)
Derek, thank you so much for joining me. I'd love for you to just start by sharing a little bit about Socratic's AI, what problem it solves, and why you're excited about the space.
Derek Boman (01:05)
Yes, Sani. Thanks for having me. Grateful to be here on the podcast. So Socratic's AI is a platform that is designed for investment bankers and middle-market transaction advisors in the M&A space doing complex valuations on private companies. And so our product can take in a set of historical financials
for an M&A target on the buy side or sell side, and then help project out future forecasts and eliminate a lot of the kind of tedious grind work that an analyst would do putting together those figures.
Sani Djaya (01:46)
Cool. So tell me a little bit more about that. Like, how are you applying AI in that space? Yeah, I'm not super familiar with what an analyst does in M&A.
Derek Boman (02:00)
Yeah, so today what an analyst or an associate or a lot of times a junior level person would do in this space is when a...
entrepreneur or a company management wants to sell their company, they engage with who would be one of our clients, a transaction broker to help put together that deal before they go shop it out to market to potential buyers. And today what they would do is they would ask that seller, hey, give us a set of your historical financial statements.
For a large company, they may have sophisticated accounting and finance, but for a smaller company or in kind of the private company mid-market, middle to lower market, their financials oftentimes are quite messy. They're not up to the standard of...
like what a public company or a larger company that follows US GAAP would be. And so the first thing an analyst would do when they get those financials... which, first of all, they get them however they get them. They can come in a variety of different formats. It could be an Excel document, it could be a PDF, sometimes even a screenshot or an image.
So they have to get that into a state where then they can do Excel modeling on it, which involves a lot of like either copy paste work or a lot of data cleanup in order to even get it to the point where they can start building out formulas and forecasts and looking at different things. And what our software does is we can take in that messy unstructured data format, parse the documents, understand what are the different time periods, the line items.
classify it, cleanse the data and structure it to make it look more like a larger, more public company. And once those financials are standardized, build out a forecast model, build out other types of analysis, be able to chat with the financials and look for anomalies and all kinds of other analysis that you can do once you have the data clean and structured.
Sani Djaya (04:08)
Yeah, are they giving bank statements as well? Like a PDF of a bank statement? Or, yeah, what kind of documents are they receiving?
Derek Boman (04:18)
Sometimes
that happens. At the moment we're focused on just taking raw financial statements out of QuickBooks or out of Xero or some other accounting system, or whatever their accountant or their bookkeeper might send over to them. And sometimes there's like 20 lines, sometimes there's 150 lines, and
they can be all kind of over the place, very bespoke to that particular business. And we use a combination of some pattern matching, some rules, some large language models to try and interpret that and kind of standardize it down to a templatized or normalized set of financial accounting.
Sani Djaya (05:02)
Got it, got it. Anything that you're able to share around how you're taking that messy data and being able to normalize it and make it into a nice model that they can play with, as opposed to a human, an analyst or an associate, doing all that manual work?
Derek Boman (05:21)
Yeah, well, there's a lot of different technologies under the hood, and I won't get into all of them, but we use a combination of things. This isn't something where you can just throw into ChatGPT and get an output. First of all, you wouldn't want to do that anyway, because of the proprietary, confidential nature of these kinds of private transactions. And so first of all, it needs to be very, very secure. And second, it's a combination of
large language models when you need to understand the nuances of the language and what those particular line items might be referring to. But a lot of times there are certain patterns and things that you can recognize and build different rules or algorithms around to be able to make sense of that. But do it in a way that is still much more efficient than someone else having to try and reason through that, or go back to a management team and say, well, what is this?
You may have to do that anyway, but if we can eliminate the majority of the work and you only have to do it on the margins to go back and say, hey, one to four out of a hundred lines need some additional input or need like a human review, then ideally we've cut out a lot of tedious grunt work on the front end.
Sani Djaya (06:37)
Do you have, you know, when you're looking at a problem like trying to normalize this data, do you have a sense or heuristic to think about, like, in this case we use an LLM, in a different case we should use an algorithm, or in this case we should use something else? Do you have a sense of that? I don't have any experience in this, so I'm just super curious here.
Derek Boman (07:02)
Yeah,
I like to think about it like this: when an input goes in there, let's think about things like latency. Let's think about things like cost. And if the line says gross profit, and one of our categories is gross profit, then that's an easy match.
And there's no need to make a call out to a large language model, where there's going to be latency when you've got to send those tokens. It's got to have context, it's got to try and reason through and understand. The user doesn't want to wait for things that you could easily know otherwise. And so there's a lot of different nuance and technique that goes into that, but ultimately it's for optimizing around cost and processing time.
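The cheap-path-first routing described here can be sketched roughly as follows. This is an illustrative sketch, not Socratic's actual implementation: the category set, the rule, and the `llm_classify` fallback are all hypothetical stand-ins.

```python
# Resolve easy line items locally; only ambiguous labels pay the
# latency and cost of a large language model call.
STANDARD_CATEGORIES = {
    "revenue", "cost of goods sold", "gross profit",
    "operating expenses", "net income",
}

def classify_line_item(label, llm_classify=None):
    """Map a raw financial-statement label to a standard category."""
    normalized = label.strip().lower()
    if normalized in STANDARD_CATEGORIES:
        return normalized  # exact match: no model call, no waiting
    # Cheap pattern/rule layer before any LLM is involved
    if "salar" in normalized or "wages" in normalized:
        return "operating expenses"
    # Only genuinely ambiguous labels fall through to the model
    if llm_classify is not None:
        return llm_classify(label)
    return "unclassified"

print(classify_line_item("Gross Profit"))    # handled locally
print(classify_line_item("Owner's salary"))  # handled by a rule
```

In this sketch, "Gross Profit" never touches a model, which is exactly the latency and cost win being described.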
Sani Djaya (07:53)
Got it. Okay. I mean, I've definitely talked to a couple of folks, for example one in the legal space, and they're doing interesting LLM things in the legal space. One of the things that I've definitely heard about is
the ability and accuracy of sending PDFs to an LLM. And I have heard that they're like, we should have just used OCR, like Amazon's or Microsoft's OCR. Have you seen a similar thing as well? I haven't spoken to them in a while, so I don't know if the latest models are even better at extracting information out of a PDF.
Derek Boman (08:32)
Yeah,
so PDFs are interesting. They're like semi-structured where you can kind of extract text out of them.
in a fairly decent way. But in our case, when we're dealing with tabular data, meaning there's columns and rows, if you just try and copy-paste that out of a PDF, even if it's got live, indexable text, a lot of times the structure of the table gets totally disorganized. And so,
are there OCR technologies that can extract all the information? Yeah, but sometimes they get all jumbled. And if it gets jumbled, or if a column gets off by one, then you've screwed up the entire thing, right? So the integrity of that table needs to be maintained, which can be difficult for an OCR if there's not
Sani Djaya (09:17)
Yeah, all the data is good. Yeah, yeah.
Derek Boman (09:32)
specific column rows or grid lines. Or if the table jumps from one page onto the next page, but they need to be merged. So there's a lot of complexity in doing this with tabular data. And there's different techniques that can be used there in order to maintain the column and row relationship, which you absolutely have to have. If it gets jumbled, then game over.
Sani Djaya (09:34)
Yeah. Yeah. Yeah. Not always true for financial statements. Yeah.
Yeah.
Derek Boman (10:01)
So the models themselves have gotten much, much better at interpreting those types of things. It's a similar type of analysis that I would do there, as I was saying before: what are the trade-offs of doing it with one technique versus another in terms of latency, in terms of cost, in terms of accuracy. All of those need to be taken into consideration in order to choose which method we should use given this particular type of input.
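One cheap structural guard against the jumbling described above is to validate the shape of an extracted table before any modeling happens. A minimal sketch, assuming each row should carry a label plus one value per period; this is illustrative, not the product's parser:

```python
def check_table_shape(rows, n_periods):
    """Flag rows that don't have a label plus one value per period.
    A single cell slipping a column can invalidate the whole table."""
    return [
        (i, len(row))
        for i, row in enumerate(rows)
        if len(row) != 1 + n_periods
    ]

good = [["Revenue", "1,200", "1,350"], ["COGS", "400", "450"]]
bad = [["Revenue", "1,200", "1,350"], ["COGS", "400"]]  # a cell fell off

print(check_table_shape(good, 2))  # no problems
print(check_table_shape(bad, 2))   # row 1 has 2 cells instead of 3
```

A check like this catches the off-by-one column case deterministically, before anything downstream builds formulas on a corrupted table.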
Sani Djaya (10:21)
Yeah, yeah. I actually pulled up HubSpot's 10-K, the past one, 2024. Just to highlight what we mean by, hey, when you feed this to an OCR, do they know this is the same row? Right, and the accuracy of that. Yeah.
Derek Boman (10:45)
Yeah, or that dollar sign that's kind of floating out there. Is it gonna
think that that's its own column?
Sani Djaya (10:51)
Yeah, yeah, yeah. And so it seems like it's not a fully solved problem, is what it sounds like.
Derek Boman (10:59)
Well,
you would think that it would be relatively straightforward, but yeah, it is not, right? These things are very tricky, and parsing financial data is really, really important for public company analysis. But that, I mean, that's structured, for like a 10-K or whatever that was.
Sani Djaya (11:05)
Got it.
Derek Boman (11:20)
But think about the private market side, where, you know, it hasn't gone through all the rigor of reporting to the SEC and going through all those standards bodies to report it in a very consistent way. In the private company world, it's a bit more like the Wild West. And so it's an even trickier problem than what you just described.
Sani Djaya (11:23)
Yeah.
Yeah.
Yeah.
Got it. So, I mean, we talked a lot about grabbing the data and normalizing it. Are you doing anything after it's been normalized? Yeah, any kind of features there that you want to highlight?
Derek Boman (11:55)
Yeah, I mean, once it's normalized, then you can perform all kinds of mathematical transformations or other types of analysis on it. You can derive key metrics really quickly, derive a statement of cash flow, and try and get down to: what is the cash view of this business? What does it look like moving forward into the future? And be able to iterate on it, the different time periods, and build out different scenarios
and things. One thing we're really excited about is then the ability for a
large language model to look at that data and spot certain anomalies, like, this period is an outlier. Maybe go back to the management team and ask them about this, or why did this particular line item spike in that year? Is that an unusual item that, in an M&A transaction, you might want to extract, or, as they call it, add back that particular expense, because it was a one-time thing, or it was COVID, or
Sani Djaya (12:36)
Yeah.
Yeah, yeah.
Derek Boman (13:00)
there is some kind of anomaly. If you're looking at large amounts of data, certain things can go unnoticed.
I mean, you've got to look really carefully, which an analyst would do, but it's just very time consuming. Whereas if we can spot those things and call attention to them very quickly, or say, hey, this line was really messy on the input and we just need to flip the sign, then you just accept those changes. So it can spot errors and then self-correct, or just ask the user if they want to confirm it, and then correct things really, really quickly. So basically it goes through the thought process and the workflow that one of those workers would do, but in kind of an accelerated fashion.
Sani Djaya (13:46)
Yeah, so you mentioned a couple of interesting things. One was, it sounds like, being able to chat with the data itself, right? But I'm also thinking about how there's a lot of information for analysts and these M&A folks. They have general ideas, typically, around
make sure to look at this information, look closely here, this should be the ratio of this thing, that kind of stuff. It sounds like there's a mixture of allowing the LLM to come up with its own freeform analysis, but I assume there's also some rules-based things that you're feeding to the LLM in a kind of workflow or deterministic way,
to then highlight it in the UI, or when the user chats with the LLM as well. Does that sound right?
Derek Boman (14:45)
Yeah, the way we think about it is like, there are certain common tasks or certain like gotchas that someone that works in this field would be trained to look for of like, hey,
Sani Djaya (14:54)
Yeah.
Derek Boman (14:56)
this particular thing, always go there, because that's where a lot of accounting shenanigans happen. Or in a small business, an owner might be putting some personal expense into the business for tax reasons, where if you're the buyer, you're going to discount some of that stuff back. And so knowing where those kind of gotcha things are, and then building prompts around those, or building kind of specialized agents that go look for those particular
things. Like, we just released one this week where, if there's a line item that's off that causes a balance sheet not to balance, or an income statement not to balance from the input to the output, we hunt for the error and say: this is where we think it is. So it's giving particular
agents, if you want to call it that, or particular AI prompts, the job of looking for very specific, narrow things. And then our system becomes kind of the orchestrator of those different narrow tasks, to build this kind of workflow management where you can do the end-to-end process, because all of these narrow tasks, all of these little gotchas, are looked for and corrected.
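The "flip the sign" hunt mentioned here has a deterministic core: if exactly one line item came in with the wrong sign, flipping it changes the column's sum by twice its value, so the culprit can be found with arithmetic before any model is consulted. A hypothetical sketch under that single-error assumption:

```python
def hunt_sign_error(line_items, reported_total):
    """Find line items whose sign, if flipped, reconciles the column
    with its reported total (assumes one flipped-sign error at most)."""
    s = sum(line_items.values())
    # Flipping value v changes the sum by -2 * v, so v == (s - total) / 2
    target = (s - reported_total) / 2
    return [name for name, v in line_items.items() if abs(v - target) < 1e-9]

# Inventory came in negative; flipping it makes the column hit 180.
assets = {"cash": 100.0, "receivables": 50.0, "inventory": -30.0}
print(hunt_sign_error(assets, 180.0))
```

A narrow check like this is the kind of task that doesn't need an LLM at all; the model layer would come in for errors with no closed-form signature.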
Sani Djaya (16:15)
Gotcha. Yeah. And I assume your team probably has some expertise in this. And then I assume you're also talking to the users to learn from those gotchas and then apply them in your product as well.
Derek Boman (16:30)
Yeah, we're lucky enough that we have a lot of people on our team who have worked in this space for many decades or done some pretty complex deals, as well as a great
set of strategic design partners that have come on early to our product and have shared data, confidentially and privately of course, that we can learn from, to see the variability that's out there, as well as pick their brains on: what would you look for in this particular case, or what would you expect the system to be able to find?
Sani Djaya (17:05)
Yeah, yeah, that's awesome. I'm curious if you're doing anything with any of the reasoning models.
Derek Boman (17:16)
Yeah, for sure. The way we think about it is that models are
good at different things, so use different models for what they're good at. Some are good at parsing PDFs with their vision capabilities or their multimodal capabilities. Others, you just need a quick answer, or it's good for retrieval. So a smaller-parameter model may be good when someone asks, hey, what was net income in year three? That's just a retrieval question; you can retrieve it and answer it, so you don't need a big reasoning model for something like that.
Now, for the types of error hunting or analysis that I was just describing, of, hey, take this data set and look for something that is off, now it has to be able to reason on that. The thinking, chain-of-thought models, we've found, help there. Obviously you pay a price for the latency, in that it takes a while to do that type of analysis, but
if it spots something that would have taken you, you know, tens of minutes or an hour to find, or that might have gone unnoticed, then that trade-off is worth that type of hunting and thinking. Particularly if you can do it in parallel, where while that analysis is running, you can go do other tasks, or the user can be engaged in some other
part of the interface while the analysis runs, and then it tells you: I'm done, and here's what I found.
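The "run it in the background while the user keeps working" pattern described here is plain async orchestration. A minimal sketch, where `slow_anomaly_scan` is a stand-in for a slow reasoning-model call, not a real API:

```python
import asyncio

async def slow_anomaly_scan(statements):
    """Stand-in for a long-running reasoning-model analysis."""
    await asyncio.sleep(0.05)  # placeholder for model latency
    return ["FY2021 'other expenses' looks like a one-time outlier"]

async def session():
    # Kick off the scan without blocking the rest of the session.
    scan = asyncio.create_task(slow_anomaly_scan({}))
    # ...the user keeps working elsewhere in the interface here...
    return await scan  # collect findings once the scan says it's done

print(asyncio.run(session()))
```

The point of the design is that the expensive reasoning call never blocks the interactive path; the user only waits at the moment they actually want the findings.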
Sani Djaya (18:56)
Yeah. Yeah. I remember, so it's funny, because everybody kind of views OpenAI as the first to release deep research, right? But Google, with Gemini, actually released deep research first. I remember first using Gemini deep research. I would just give it a task, and it'd be like, it'll take 10 minutes. And I would just walk away and go do something else, and then I would come back and be like, it's done, this is great research, right? And I was just so delighted that I could just walk away and go do something else.
Especially if it's something M&A related, where going through with the deal and missing something could end up being tens of millions of dollars of impact, right?
Derek Boman (19:35)
Yeah. And
so today, the accuracy is, I mean, the accuracy is really, really important. And today, you hear all these stories about an investment banking analyst working 80, a hundred hour weeks. They're up till the late hours of the morning because they're going cell by cell, formula by formula, just making sure that everything is watertight, where
Sani Djaya (19:49)
Yeah.
Yeah, yeah, yeah.
Derek Boman (19:59)
your brain is probably not at full capacity at that hour of the morning to do that type of deep thinking. Whereas, you know, if you could set an AI that doesn't get tired, that doesn't have those types of physical constraints, to say, hey, go just make sure that this is sound, or defensible, or that there are no errors, like mechanical errors, in this thing. And you can either step away from it or
Sani Djaya (20:22)
Yeah.
Derek Boman (20:28)
work on another deal until that finishes, and then come back. We see that as kind of fundamentally different than the tedious, time-consuming, cumbersome way that it's done today.
Sani Djaya (20:41)
Yep. Yeah. Yeah. So it sounds like a super impactful product. I know OpenAI and all the foundation model providers are just going to constantly release better models, and so your product's going to continue to catch that wave of, hey, a new model is released that is better, and then you can apply that new model. Yeah, like o3 was released recently along with o4-mini, and o3 can now do reasoning on visual
images. I don't know if that's super relevant to you, but I saw an example of somebody playing GeoGuessr. Do you know what GeoGuessr is?
Derek Boman (21:19)
No, I haven't heard of it.
Sani Djaya (21:20)
It's a... somebody took Google Maps data, all the photos from Street View, and made it so that it'll randomly put you somewhere in the world, and you have to guess as close as possible where you are in the world. So it's a game called GeoGuessr using the Google Maps data. Yeah. And so what they did is they screenshotted what GeoGuessr gave them, sent it to o3, and asked, where am I? And then o3 would come back, and it would be
Derek Boman (21:37)
I have heard of this, yeah.
Sani Djaya (21:49)
pretty good, like pretty good, because it's able to take this photo, and you can see in its reasoning: it's zooming in on the license plate on this car, like, this license plate is common in this area of the world, right? These trees are common in this area of the world. And it would get pretty good. Like, it was really good. So it's really interesting, maybe not super applicable to your area in terms of image reasoning. But yeah, the models are just going to get better, and you're just going to continue to improve
by just plugging in the new model.
Derek Boman (22:22)
Yeah, and
in our case there's mathematical reasoning, which has been a constraint on the AI front, where they haven't been great at math in a lot of cases, and our product absolutely depends on it. So there are things we can rely on an LLM for, and other things where we need to use Python code, because it's more deterministic. And when you're talking about
math, where there is a particular right answer, there's no room for it to just hallucinate. But as these get better at reasoning, at mathematics and things like that, like you said, we'll just benefit and ride on the backs of that. The way we think about it is, we're the orchestrator of when to use the right model at what time, in order to do not just
Sani Djaya (22:49)
Yeah.
Derek Boman (23:10)
one particular task that you can just zero-shot into the model, like, where am I in the world here? But it's: where am I in the world, and then what would I have for breakfast, and then where would I stay that night, and so on. When you're talking about workflow management, that's where I think these kinds of vertical SaaS applications come into play, where it's not just one task that a model can accomplish. It's: what is the combination
of tasks that needs to be done for that particular job, which is the right model to use, what is a prompt optimized for that particular workflow in each of those things. And that SaaS platform, that next-gen SaaS platform, becomes the orchestrator of that entire workflow, not just one particular thing that a model may or may not be so good at.
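That orchestration idea can be caricatured as a routing table: each workflow step is dispatched to whatever engine suits it, and steps with one right answer stay in ordinary code. The model names here are placeholders, not the product's actual stack:

```python
# Which engine handles which step of the workflow (illustrative only).
TASK_ENGINE = {
    "parse_pdf": "multimodal-model",
    "retrieve_fact": "small-fast-model",
    "hunt_anomalies": "reasoning-model",
    "project_forecast": "python",  # deterministic math stays in code
}

def run_step(task, payload):
    engine = TASK_ENGINE.get(task, "default-model")
    if engine == "python":
        # A forecast is arithmetic with one right answer: no LLM needed.
        base, growth = payload["base"], payload["growth"]
        return [round(base * (1 + growth) ** year, 2) for year in (1, 2, 3)]
    return f"dispatch {task!r} to {engine}"

print(run_step("project_forecast", {"base": 100.0, "growth": 0.10}))
print(run_step("hunt_anomalies", {"statements": "..."}))
```

The application layer's value, in this framing, is owning the table and the sequencing; any individual entry can be swapped for a better model from week to week without the user noticing.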
Sani Djaya (23:45)
Yeah.
Yeah.
Yeah,
yeah, I mean, people, you know, a year ago, two years ago, even still a bit now, are like, you're all just wrappers. But it's a really thick wrapper that actually adds a lot of value, right? Do you know the chocolate, Ferrero Rocher, you know what I'm talking about? Yeah, that's what I think of: you have that core, which is the foundational model, but if it was just the core, it would just be a peanut.
Derek Boman (24:17)
Yeah, yeah.
Sani Djaya (24:26)
But all the layers of the Ferrero Rocher make it super delicious. It has the wafer and then the chocolate with the peanuts on top. That's what all these vertical SaaS applications are building. The foundation model is just the peanut in the center, and then everything around it is the deliciousness that adds to the whole thing.
Derek Boman (24:44)
Yeah. Well, I think like a year or two ago,
that terminology was very kind of degrading, of like, this thing is just a wrapper. Where I think almost the opposite is true now. A bunch of people thought that the value would accrue at the model layer, but now we've seen those models... like, initially there was kind of one big player in town, OpenAI, but then
Sani Djaya (24:50)
Yeah.
Yeah.
Yeah.
Derek Boman (25:11)
Now these days it's like, like you were saying, I used the Gemini deep research before I ever used it on the OpenAI side. It was my first exposure to it, and I thought, this is fantastic, this is amazing. And they're all kind of neck and neck now; if you look at the benchmarks, it changes every single week. I don't think anyone's going to care down the road. If an application
Sani Djaya (25:21)
Yep. Yeah.
Derek Boman (25:36)
at the application layer that is optimized for their workflow does what they need it to do, they're not going to care what's under the hood. In fact, what's under the hood could change from week to week, or be routed to a task-optimized model. In that sense, the model is much more of a commodity, or less valuable, you know. And that's what we think at Socratic: for our particular use cases, the application layer, the user experience, is
what we're really bullish on, or where we think over time the value starts to accrue.
Sani Djaya (26:13)
Yeah, it's like, do users care if it's AWS or Google Cloud or Azure? Do they care if you're using React Native, or you're doing it native on iOS or Android? Do they care if it's Ruby or React? They don't care. They just want the value, and that the UI works well and the data is accurate, all this kind of stuff. Yeah.
Derek Boman (26:19)
Yeah, exactly. Yeah.
Sani Djaya (26:39)
Cool. Well, that was an awesome, great deep dive there. I'm curious now, moving on: what kind of AI tools and products have you really enjoyed, and what have your teams been adopting that has been really beneficial?
Derek Boman (26:55)
Yeah. This is a constantly evolving space, right? And there are new tools that come on the market every week, or things that we're trying or tinkering with. And what we're using today may be totally different by the time someone even listens to this podcast. So there's a lot of shifting sands in this area, which is exciting, but also you've got to try and keep on top of it as much as possible. But
I'll give you a few examples of ones that we find particularly beneficial as a small startup team. What we're able to accomplish today, given a small team of about six people, versus what we might have been able to accomplish a couple of years ago with that same team: totally different. So on the code development side, obviously we're using tools like Cursor,
OpenAI, and others to generate code or review stuff or find bugs in our software, and just work a lot faster. On the go-to-market side, copywriting is something they're excellent at, and you can refine and do marketing copy, or positioning, emails, website copy. Yeah, there's just so many things where I feel like I'm doing the job of what would have been
10 people. My background by trade is in UX design, and it's really fascinating what is happening there on the prototyping front, where you're not only prototyping something that looks and feels like a real application, but it's actually, you know, live, written React code or something like that, that you could put in front of someone, and they can type into an input field, they can,
you know, try a different slider, and things that would have been really difficult to prototype in, say, Figma, or you'd have to kind of fake it. You can, with just a prompt, prototype and mock those things up. Now, to get it beyond that, into something that's in production, there's still a ways to go on tools like that, like Lovable or Bolt or a handful of others; there's probably like 20 of those right now. And so,
We're really excited by all these things that just make our team much, much more efficient and the way that we're able to run as a kind of very lean startup. But like I said, what we're using today is probably not what we're going to be using maybe even two or three months from now.
Sani Djaya (29:26)
Yeah.
On the Lovable prototyping front, what have you found as useful workflows in that area?
Derek Boman (29:47)
Well, right now, for me, it's kind of experimentation. It's kind of an experimentation-lab phase. Where I could see it going is similar to where we saw prototyping tools evolve in design. Like 10 years ago, in 2015, there were a lot of prototyping tools coming onto the market: there was InVision and Marvel, and Figma was just getting started.
Framer was something totally different at the time. And every week there was a new prototyping tool where you'd have to initially download your screenshots from Sketch or whatever we were doing UI in at the time, upload it into like InVision. Like the workflow was just kind of janky. And then that's where Figma came in where they combined the UI design and the prototyping in one tool.
And I feel like now it's the UI prototyping and front end code in one tool. That's pretty awesome. Where I can just click publish and it's on a URL and I can send people to that URL and say, go try it out. So that combination is really compelling, but it's like to collaborate with my team. Yeah, they integrate with like GitHub and things like that. But to then say, okay, we put this in front of a customer. They could click on the link. They could give us feedback and we could prototype and
see if we're on the mark, now go put it right into a workflow. That handoff is not quite there yet, in my view. It's probably not far off though, like in a year or two, I think that's probably solved where, okay, hey, we validated this prototype and now plug it fully into our backend and all of that and it's ready to go.
Sani Djaya (31:22)
Yeah.
Derek Boman (31:40)
I mean, we'll see where that evolves, but I think that's definitely where it's headed.
Sani Djaya (31:43)
Gotcha, gotcha. Yeah, similar-ish experiences. I think with Lovable and v0 and stuff, there are two core things that I still don't think it solves, as a product manager or an early designer exploring a solution space.
Well, not early solutions, but two core things that I think it still doesn't solve. One is the one that you talked about: once we've validated and we know we want to go this direction, how do we hand it off to engineering? Unsolved right now, especially with an existing product, right? You have an existing product, and then you have this Lovable prototype, and that merging isn't easy, right? And then the other one that I have found as an issue is that in all of the tools, it is one prototype.
Derek Boman (32:18)
Yes, yes.
Sani Djaya (32:31)
But when you're designing, you're often looking at multiple designs, to look at the pros and cons of the different ones and compare and contrast. But right now, with Lovable, v0, Replit, it's all, here's the one implementation. And if you want to do it again, you've got to create a new project and start from scratch again.
Derek Boman (32:46)
Yeah. Or, I haven't gotten deep into this, I think they can do it to some extent, but not fully what you would need to really build an application, which is multiple, many, many screens, right? Where you may be able to mock up... like, last night I did one that was kind of a modal.
Sani Djaya (33:00)
Yeah, yeah,
Derek Boman (33:05)
A pop-up, where the modal could pop up, you could turn on some switches, you could enter some information or toggle some things, and then close the modal. But that's only part of a workflow, right? So you could prototype that particular interaction, but how does that fit in a larger application? That's many, many screens, and that kind of complexity I don't think they're prepared to handle quite yet.
Sani Djaya (33:25)
Yeah.
Yeah, yeah. So it's fun, it's interesting. Still some things that it's lacking. The one company I'm starting to explore that's in the same space is called Magic Patterns. And they do have this thing where you can have multiple designs side by side. It's all written in code, and they'll just allow you to view them side by side. And I was like, that's interesting, I want to explore more about it. I've only really played around with it a little bit, but I
Derek Boman (33:50)
Hmm.
Sani Djaya (33:59)
highly recommend checking it out for that one piece of, I want to explore multiple versions. The other piece, of, okay, we now want to do this, hand it off to engineering, as far as I know is unsolved. Cool. Well, it's been really great chatting, and so I just have a few last closing questions. The first one is: where do you think Socratic's AI will be in six to 12 weeks... or sorry, six to 12 months from now?
Derek Boman (34:29)
Yeah, this is an evolving space, and we've got a number of early partners that we've onboarded, and we're ramping up with some that have been on our wait list for a little bit, where every week we're adding new customers to our platform, or people to try it out and give us feedback, and we'll continue to evolve it. Every week we've got new features, and we're increasing the accuracy of what we're able to do as well as the
depth of the analysis that we're able to do. So in six to 12 months, I would expect us to go from three-statement modeling and forecasting to a deeper analysis of a lot of the underlying line items, or the type of thing that a banker would do, frankly, in an M&A transaction. We can cut out a lot of the tedious and cumbersome work at the front end of that, but they're...
Sani Djaya (35:26)
Yeah.
Derek Boman (35:27)
they're likely gonna have to take it down to Excel and finish off the rest. Over time, though, we want them to go longer in our tool natively, to support more and more of the types of things that they would do in Excel before they have to take it down or put the final touches on it.
Sani Djaya (35:45)
Yeah, yeah, yeah. And then if you could solve one major roadblock in getting there, what do you think it would be?
Derek Boman (35:57)
That's a good question. I mean, this is one of those things where we're dealing with a lot of variable data. There's an infinite number of ways that a small business can structure their financial statements. And we get better the more that we see, or the more that our platform interacts with that variety of data. And so just as we bring people on, that starts to remove that
roadblock incrementally, because we've just seen more stuff and it gets better and better, like any machine learning system or other kind of AI platform. And that roadblock is part of why we exist: we make that easier for people. And the more that we see, the better we get. But it's unpredictable, and there's an infinite number of ways that it could possibly be, and there's no way that we've seen it all until we see it.
But that's what also makes it fun and exciting and kind of a challenge.
Sani Djaya (37:02)
Cool, cool. All right, second to last question. What are you most proud of, and why? And it can be personal or professional.
Derek Boman (37:14)
I mean, I'm very much a family guy, so I'm gonna go with my family there. I'm married, I've got two boys, 10 and seven. And yeah, I'd say I'm most proud of my family.
Sani Djaya (37:30)
That's awesome. That definitely does come up. Actually, if I think about it, across the existing podcast guests, I do see a trend where the people who are older say family is the thing that they're most proud of.
Derek Boman (37:52)
Well,
we grind on our startups. You know, we are excited by this technology wave and everything that's happening right now. We have to balance that and ground ourselves in the things that keep us going. And for us, you know, that's our family, or that's our loved ones, or whatever it may be. It's an inspiration to do what we do, but also a reminder that, you know, at the end of the day,
startups are great, technology's great, but there's more to life than just that, right?
Sani Djaya (38:29)
Yeah, absolutely, absolutely. All right, last question is, where can people find you online if they want to learn more about you and Socratic's AI and how can listeners be useful to you?
Derek Boman (38:43)
Yeah, so if you're in this space, if you are a buyer or a seller in M&A, or you help private companies fundraise or finance their businesses, we'd love to talk to you, particularly if you work in the lower middle market. Our website is Socratic.ai, and you can find me on LinkedIn, it's Derek Boman, and we have a company page, Socratic.ai, on LinkedIn as well. I would love to talk to you there.
Sani Djaya (39:12)
Awesome.