Desmond Fleming hosts visionary business leaders who share insights on how they built their companies and how venture capital made it possible.
[Desmond Fleming] (0:00 - 1:05)
Well, Ed, I'm so excited to have you on to talk about your company, Tolt IQ, formerly Diligent IQ. Yes.
And before we talk about the business, why don't we frame up for the audience who you are and your career. Maybe to start, let's talk about your experience, I think 10 years or so, at KKR. Yeah. Unpack that a little bit.
So I got lucky joining KKR. I had just come off building my own business, which I sold in the early 2000s. I was doing consulting work, and I actually got this random call from a recruiter looking for recommendations on people who might be interested in this new CIO job that they were forming at KKR.
And to be honest, I understood broadly what private equity was, and it would be hard not to know the KKR brand at some level, but I didn't really appreciate it. And this is in the early 2000s.
[Ed Brandman] (1:06 - 1:06)
Yeah.
[Desmond Fleming] (1:06 - 1:43)
So it's like 20-ish years, 15 years into my career. And KKR, right. I'm now effectively two thirds of the way through my career.
Yeah. But I spent my career in capital markets, trading technology, operations. I'd never really spent any time in the private equity world, or what we all now know as the private markets world.
So everything for me was the liquid markets: I had built solutions and worked with business people in trading, in sales trading, in capital markets. And the funny part of the story is when I was telling someone I was trying to come up with names for KKR, they're like, you should put your name in that hat.
[Ed Brandman] (1:43 - 1:45)
And I was like, hold on, wait a second.
[Desmond Fleming] (1:45 - 2:42)
I feel like if you could have a career for 10 years and you could be a KKR versus almost any place else, you want to be a KKR. So that's when the light went on and I called up and I'm like, OK, put my name in the hat. And she was very accommodating and put my name in the hat.
Yeah. Fortunately, I got the job. The day I got the job, I was sitting at a bar with a napkin and I remember the recruiter calling me and she's like, where are you?
I'm like, in a bar. She's like, why are you in a bar like today? I'm like, well, you're either going to give me good news and I'm going to drink or you're going to give me bad news and I'm going to drink.
So she gave me the good news and I joined KKR. And look, in 2007, KKR was still a private firm, largely focused on private equity. There was a credit business out in California, but it operated reasonably independently at the time.
It's now fully integrated into the firm these days. And the technology team was five guys and a screwdriver. Wow.
And this is probably, I'm guessing, AUM of 50-ish.
[Ed Brandman] (2:42 - 2:43)
Not even.
[Desmond Fleming] (2:43 - 8:50)
Under 50. I think the AUM was maybe 30, between 30 and 40 billion. Which is still a lot.
Still a lot, but nowhere near where the world is today. But look, they were the marquee brand, right? Along with Blackstone and Carlyle and TPG and a handful of others.
And at the time, the technology team, again, was literally a five-person team, mostly focused on the support of the firm, right? Like keep the email running, keep the networks running, keep the video conference calls running. I lost more battles to video conferencing than I won when I was building out technology there.
But it was a private firm, right? And it was a private firm with basically three flagship funds. One in the US, one in Europe, one in Asia.
The mission that I didn't fully appreciate until I got there was Henry Kravis and George Roberts were putting the company on a course both to go public, but also to grow its capabilities in terms of all the other asset classes that they wanted to be in, right? They wanted to be in a position where they didn't have to say no to a client looking for some form of permanent capital, private equity fund capital, credit capital, real estate capital. I mean, this was the beginning of the journey.
I had no idea where that journey was going to take me over 11 years. But over the course of those 11 years, the company went public. We went from 30 to 40 billion in AUM until I retired in 2018 when it was a quarter of a trillion dollars in AUM.
It went from five offices to, you know, 25 offices. And my technology and operations team had scaled from five people to about 145 people at the time. And so it was a very significant growth.
And I was very fortunate that over my tenure there, my role expanded. So it went from being chief information officer and kind of having responsibility for the entire technology organization to also taking on responsibility for credit operations as a business support function, as the growth of the credit business expanded, and then ultimately included private equity fund operations too, right? So how do we onboard funds?
All the data that goes with that, the investor information, there's a lot of support activity that goes into onboarding funds and onboarding clients and providing support. The credit side is, operationally, very different from the PE world, right, where you're buying an entire asset and you're kind of custodying that asset, but the asset is an entire company for the most part. On the credit side, you're obviously dealing with quasi-tradable assets, depending on where you are in the spectrum of private versus public credit.
And so I also got a firsthand opportunity to provide diligence support when we were buying companies. So the private equity teams brought me over the wall. I got involved, obviously, because at a large firm like KKR,
you have lots of outside advisors that also come in and help you with diligence: financial diligence, commercial diligence, tech diligence, tax diligence, all these things. When you're writing substantial checks, you're going to spend a lot of time on diligence. And that diligence invariably begins in a virtual data room where all the documents are being loaded up from either the bankers or the company themselves.
And at that time, were you all using Intralinks? The two dominant players in the market, and I think they continue to be the two dominant players today, are Intralinks and Datasite. When I was at KKR, the PE firm doesn't pick the diligence provider, right?
The banker and the company do. But what happened up until two years ago in any diligence exercise, and I went through multiple of them, whether the company is a technology company or tech is a significant part of the story, is that you're gaining access to that VDR, which has been highly structured and set up on a secure basis. But most investment professionals can't live and work inside of the data room.
They need to live and work in their own office suite. And I mean by office suite, like Microsoft Office Suite, right? They need to get the data into Excel, into PowerPoint, into Word.
They need to deconstruct the confidential information memorandum, the lease agreements, the insurance policies, the client contracts. But up until this world of generative AI, that was an 85% people problem and 15% technology. And I say that because everything was document based, right?
It is the singular defining feature of a virtual data room. It is not, for the most part, highly structured data. It's all the documents that support the business.
It's unstructured data in a structured package. That's exactly the way to think about it. By the way, if you're lucky, lots of times you get the unstructured data in highly unstructured folder structures that randomly get updated on different days as you're going through the deal.
And so the larger the deal, the more complex that task is, the more people you need. Because again, if you're writing a large check, even if you want to get to yes on this deal, there's a lot of yellow flags and red flags you either need to clear out or you need to have a way that you're going to address it and solve it going forward. And so after 11 years at KKR, I happily retired and got in my car with my dog to go visit national parks, which you can follow in my LinkedIn profile.
And then my oldest son, who runs cybersecurity at Duolingo, called me up and said, I think generative AI and OpenAI and ChatGPT are going to change the nature of work. And did he call you, was this in 2024, 2023? This was the end of 2022.
So post ChatGPT. ChatGPT 3.5, I guess, had just been released, with maybe a 4,000 token capacity. And even then he had said to me, he's always been on the engineering side.
And he's like, I think this is going to change the nature of work. And that's actually funny because Duolingo is one of the more AI native companies currently in our ecosystem. So I had no appreciation that he was...
He's on the leading edge. He was on the lead, but I didn't know that. I was coming back from a national park, right?
And I was like getting the dog in my backpack, you know.
[Ed Brandman] (8:50 - 8:52)
And you want me to get out of retirement? What are you doing?
[Desmond Fleming] (8:53 - 13:59)
Really? Like I've been 27 years on Wall Street and survived and I'm still married. Like, isn't that enough?
That's a win. And you're doing well. You got a job at Duolingo.
Come on. Right. I'm like, come on.
So he's actually still at Duolingo, but he really helped me think through the architecture and the design of it. And then I was fascinated by how this could transform diligence because diligence has this unique scenario, which is the virtual data room is the source of knowledge for the documents, right? That is a problem that remains challenging in most enterprises today.
So in most companies that want to overlay AI onto their corporate repository of millions of documents, when you think about the challenge with that, it has to do with all the variations on a theme that exists for every document you've ever worked on. And so there's five versions of the financials and seven versions of the client contract and two versions of the presentation. And there's nothing wrong with that in a corporate environment.
Different people have different needs. Environments are zero trust. You don't necessarily know what the person in the other group is working on, even if it's related to the same documents.
But that's not a great state for the unstructured documents to be in, because at least where we still are in 2025, with all the amazing innovations that have occurred, recency and relevancy are so intertwined in getting accurate information. And relevancy, as shaped by recency bias, which I think is a good thing, is very hard to establish in a corporate environment when you don't know whether you actually have the most up-to-date version of the document. Yes, and the larger the organization, the harder it is.
And that's, look, that's the natural structure of orgs, right? But when you want to find that needle in the haystack, or you want to correlate information or dive in deep on a topic, in the world we're currently in with AI, the knowledge does not reside in the training data or on the web, right? Those are probably the two principal places people have historically thought about.
But the language that people use to describe anything proprietary broadly is RAG, right? And RAG has lots of definitions related to retrieval augmented generation. But the thing that is so interesting about diligence is that you can apply all this technology over a known quantity of documents that you can have high confidence is the right set of documents.
Maybe somebody only produced three years versus five years of financials or they only gave you 20 of the top 50 customer documents. But what you all agree on as parties to a transaction is that virtual data room, whoever is hosting it, that's going to be the repository. This has all the context you will need to execute your task or job.
You may not know where it is within the folders or the documents, but you do have a high degree of confidence that it is either here, or if it's not here, you're going to go back to your counterparty and ask for it. And say, hey, I need additional information.
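To make that concrete, here is a minimal sketch of retrieval-augmented generation over a closed document set like a VDR export. The chunk texts and the call_llm stub are illustrative placeholders, not Tolt IQ's actual pipeline, and TF-IDF retrieval stands in for the dense embeddings a production system would more likely use.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy stand-ins for text chunks extracted from a VDR export.
CHUNKS = [
    "FY2023 audited financials: revenue grew 18% year over year.",
    "Master services agreement with the largest customer, renewed in 2022.",
    "General liability insurance policy with a $5,000,000 limit.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k chunks most similar to the question."""
    vectorizer = TfidfVectorizer().fit(CHUNKS + [question])
    scores = cosine_similarity(
        vectorizer.transform([question]), vectorizer.transform(CHUNKS)
    )[0]
    return [CHUNKS[i] for i in scores.argsort()[::-1][:k]]

def call_llm(prompt: str) -> str:
    # Stub: swap in any frontier-model API call (ideally with zero data retention).
    return f"[model answer based on]\n{prompt}"

def answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    # The model only ever sees the retrieved snippets; it never trains on them.
    return call_llm(f"Answer using ONLY this context:\n{context}\n\nQ: {question}")

print(answer("What was revenue growth last year?"))
```

The key property he describes survives even in this toy version: the corpus is fixed and agreed upon, so a missing answer means a missing document, not a retrieval failure over an unbounded web.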
We dove into some component parts of what you're doing. But to zoom out a little bit.
Yeah. What is Tolt? So Tolt is the new name for our business, Tolt IQ.
The business started as Diligent IQ. As you can well imagine, there are a lot of people chasing business around the use of the word diligent. I am not the biggest of them.
And the story behind TOLT. So look, we started the brand as Diligent IQ. It was an easy brand for people to recognize and get started.
But we're maturing. What's that unique brand identity? And so we started thinking through that.
And back when I was at KKR, already deep into my career, I was sitting down with the partner who was my boss at the time, Todd. And we were going through my year-end review.
And in the midst of the meeting, he was like, so Ed, do you know what's unique about Icelandic horses? Not something you expect. Yeah, that's going to throw you off.
And look, there are lots of smart people at KKR. Like they ask challenging questions and do interesting things. But I didn't see that one coming.
Yeah. I did not have the answer. I could get through, you know, walk, trot, and gallop.
Like there's these three standard movements that horses have. And so here we are at the year-end review process. It was always an intense process at KKR.
And my boss, Todd, says to me, so did you know that Icelandic horses actually have these two additional movements, the tolt and the flying pace? And I looked at him and I said, are you trying to tell me I need to find extra gears to do my job next year? And he smiled and tacitly acknowledged that's what he was asking.
This story always stuck with me. Look, I loved the high performance and the energy at KKR. It was probably one of the most challenging and rewarding jobs I've ever had in my career.
Being told to find another gear, I guess I should have expected it.
[Ed Brandman] (13:59 - 13:59)
Yes.
[Desmond Fleming] (13:59 - 14:17)
But I didn't see it coming. But it stuck with me so much. And then I did research on the tolt and the Icelandic horses, and the fact that it really gives them additional capabilities, and it's isolated to the horses in Iceland, probably developed through their isolation from other animals, right?
[Ed Brandman] (14:17 - 14:18)
Yeah.
[Desmond Fleming] (14:18 - 14:39)
And so there's this idea that there's a movement of the horse that smooths out your experience. Yes. You think about Iceland, this rocky, icy terrain.
So on top of the name itself being cool, there's the underlying definition of the tolt and what it provides for the horses. And we're trying to smooth out the diligence process.
I was like, well, that's a cool combo.
[Ed Brandman] (14:39 - 14:39)
Yeah.
[Desmond Fleming] (14:40 - 19:44)
And we kept our logo the same. And it's actually been really well received. I had lots of people from KKR reach out to me and laugh.
Yeah. And they were like, yeah, that sounds exactly like what we experienced. Look, I think it's surprisingly fun.
Yes. And an under-discussed part of company building is obviously marketing, and B2B marketing, I would still frame it as being in the dark ages. No one really does it well, but there's lots of ways, depending on your patina, where you guys could take Tolt.
Yeah. So we rebranded, we relaunched, and if any of my customers are actually listening, we planted an Easter egg somewhere on the homepage of the platform for our actual clients to discover something unique about Tolt. Great.
Great. Love that. But so as a business, Tolt IQ, what do you all do?
What is the one-liner of Tolt IQ? Yeah. So the really simple version of it is, we think we've built a moat and a unique way for clients to analyze VDR content.
And the challenge in VDR content is often even more complex than corporate content because oftentimes the nature of the documents in a VDR are legacy documents in an organization, right? And so focusing on diligence within the construct of private equity or real estate or infrastructure, as opposed to, let's say, the venture world where companies are newer, everything is largely going to be fresh from a document standpoint, even several years fresh. But if you think about buying a company that's been around for 20 or 30 or 50 years, oftentimes you're going to get legacy contracts that go back 5, 10, 15 years, right?
Maybe even 20 years. You're going to get lots of scanned documents. You're going to get lots of documents before the days of nicely clean formatted Adobe documents.
You're going to get financials that are scanned, because that's the final version of those documents. And so you really have to put, I think, a lot of energy into understanding documents, and understanding older documents and complex documents, in order to offer the type of service that we're offering. I've probably already extended my two-minute window of what it does.
So I should go back. Look, the very simplest version is: I've got a VDR, I've got a lot of documents, and I need to get started and find answers to my problems. What's a green check mark that I don't need to think about,
what's a yellow check mark that I've got to go deeper on, and what's a red flag that I've got to really be concerned about. And some combination of the content inside that diligence room is going to help you get there.
Now you're likely going to augment that with maybe expert network calls that you do. You're going to augment that with research reports. You may even augment that with information from competitors related to the product roadmap.
But at the end of the day, you're looking to organize and digest a significant amount of information to get laser focused quickly. And in a pre-generative AI world, that is 100% human problem to solve. And there's very little technology that can actually augment it.
And so we have built out an infrastructure that allows you to effectively drag and drop that VDR, or a sub-portion of it, and then get started. Get started so that you can really accelerate how you do the diligence. And hopefully the net result of accelerating the diligence is that it opens up the doors for you to do more things that you might not otherwise have gotten to.
And do I start by defining to your platform what I'm looking for? Or is that already pre-configured? Yeah.
And so the way to think about it is, I don't know what you're about to dive into and why. You may not know what you're going to dive into and why, because you might not know all the documents that are in the VDR that just opened up. And so what we do spend a lot of time doing is, as the client uploads the VDR or a sub-portion of it to our platform, we're instantly and on an automated basis trying to categorize the documents.
We're trying to tag the documents. We're trying to add all sorts of component additions to the document itself when we store it, such that you can then go ask questions about your financial docs, and no matter what folders they were buried in, we'll point to the right financial documents.
If you're looking for the legal entity structures, we're going to know the documents that have legal entity structures. So we spend a lot of time on the inbound side, in a completely automated way, analyzing your documents on the way in the door, so that we can pre-organize and structure them for you. You can talk to the whole blob, but maybe you want to have a restricted conversation with just the financial docs or the tech diligence docs. We will automatically identify all the tables that are in the documents and turn those into Excel files for you to get started.
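As a rough illustration of that ingest step, here is a minimal sketch of automated categorization and tagging. The keyword rules are hypothetical stand-ins for the model-based classification described above; the point is only that every document gets tags at load time so later queries can be scoped by category rather than by folder.

```python
# Keyword rules standing in for a model-based classifier run at ingest.
CATEGORIES = {
    "financials": ["balance sheet", "income statement", "audited", "ebitda"],
    "legal": ["agreement", "lease", "articles of incorporation"],
    "insurance": ["policy", "premium", "limit of liability"],
}

def tag_document(text: str) -> list[str]:
    """Assign one or more category tags so queries can be scoped later,
    regardless of which VDR folder the file was buried in."""
    lowered = text.lower()
    tags = [cat for cat, keywords in CATEGORIES.items()
            if any(kw in lowered for kw in keywords)]
    return tags or ["uncategorized"]

print(tag_document("This lease agreement covers the premises at ..."))  # ['legal']
```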
Oh, wow. That's a huge pain.
[Ed Brandman] (19:44 - 19:44)
Yeah.
[Desmond Fleming] (19:44 - 21:35)
It's a painful process if you ever lived that associate life, right? Lots of cutting and pasting. We will take charts and graphs, and we will use vision models to create synthetic representations of that chart and graph so that you can talk to the chart and graph as if it were a text or a table on the page.
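A rough sketch of that chart-to-text step, assuming the OpenAI Python SDK and its vision-capable chat endpoint; the model choice and prompt wording are illustrative, not Tolt IQ's actual pipeline.

```python
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def describe_chart(png_path: str) -> str:
    """Render a chart image as structured text so it can be indexed and
    queried alongside ordinary document passages."""
    b64 = base64.b64encode(open(png_path, "rb").read()).decode()
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Describe this chart as structured text: axes, "
                         "series, and approximate data points."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    )
    return resp.choices[0].message.content
```

The returned description is what he calls a synthetic representation: once stored next to the source page, a chart can answer questions the same way a table or paragraph can.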
Yeah. So actually in the product interface, you have some sort of chat experience. What's a bit unique about the Tolt IQ platform is it's geared for collaboration on a private markets team.
And by the way, even at large firms like a Blackstone or KKR or Brookfield, there might only be four or five people on that team, right? And then you augment that with additional resources outside. And so one, it's a collaborative space, and we think about everything from a deal construct.
So you're starting a deal, and you're loading up documents, right? Those deals are holding pens for your VDR content and whatever else you're going to augment it with. Maybe you've been on three expert calls related to the subsector you're working on.
Maybe those are audio files, not transcripts. Drop those into the room as well. We'll turn them into transcripts, and they'll become part of the story that you can interact with.
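A minimal sketch of that audio-to-transcript step, assuming the OpenAI SDK's Whisper endpoint; any speech-to-text service would fill the same role.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def transcribe(audio_path: str) -> str:
    """Turn an expert-call recording into text that can be dropped into
    the deal room alongside the VDR documents."""
    with open(audio_path, "rb") as f:
        result = client.audio.transcriptions.create(model="whisper-1", file=f)
    return result.text
```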
We then spent a bunch of time when we launched the tool at the very beginning, and we've iterated on it a couple of times. We also think that firms have patterns of behavior in terms of how they go about diligence. Yeah, I've talked about this with others and peers where each firm has a DNA in terms of how they, one, approach and think about deals, but also, two, go through the process of diligencing them.
Yeah. And so we want to respect that and recognize that everybody's unique and different. And so what we present all our clients with is a library of prompts that we think apply generally to most functional areas that you diligence.
But then you, the firm, can build up an additional library within the Tolt IQ platform.
[Ed Brandman] (21:35 - 21:35)
Yeah.
[Desmond Fleming] (21:35 - 37:34)
And so there are actually three layers of prompts. There are the prompts that Tolt IQ provides out of the box: well-tested, utilizing prompt engineering, and they'll work across any type of VDR you upload. Then there are the prompts that you as a firm feel are the benchmarks you want everybody in the org to utilize, because they're unique to your DNA.
Which could be, for FirstMark, for me, for any deal that I look at, I always care about: tell me the 12-month, six-month, and three-month compound monthly growth rate. Right. And so you're going to construct that into a question that's going to be loaded up into the Tolt IQ library.
And then anybody at the firm has access to that well-formatted question. So whether the information came through in a CIM, whether the information came through in an SEC document, whether the information came through in scanned documents that got sent over from the finance team, we can answer that question. Yeah.
And so we find that clients are building up a library that effectively represents the DNA of how they solve a problem, right? Maybe one infrastructure firm thinks about these issues related to a pipeline and another infrastructure firm thinks about something different related to the pipeline. That's fine.
And then the third tier is, I think of it as the experimentation layer. You as an individual user of Tolt IQ are going to have hundreds and maybe thousands of different questions that are going to allow you to drill into some problem that you're looking to solve. Those don't need to be shared with the firm.
Those don't need to become standard. But you should also have your own kind of personal library of things that work for you. Right.
So we think that's an important part of the puzzle. And then what we've tried to do inside of the platform is think about, and for us today, it's about five to six different functional things you can do with your documents. So you can be in a conversation with your documents.
That's a back and forth, not dissimilar to how people think about using ChatGPT or Claude or Gemini, right? You have their front end tool, but you're limited in terms of how much information it's going to analyze up until the most recent changes where some people can now connect their enterprise data to it. But it still has its own limitations.
But maybe you're not in a conversation with documents. Maybe part of your diligence work is going deep into analyzing insurance policies or extracting a set of loan terms from a series of documents. That's a totally different flow where you're looking to extract information.
And models today are not great at, let me just dump 50, 60, 100 documents in and try to get a forced answer from every one of those. So as opposed to the traditional chat, we have the concept of what we refer to as a bulk query. You're looking to get information out and really turn it into structured data that you're going to work with in Excel.
So that's a common tool in the system. And is that bulk query just visualizing that experience? Let's say I'm looking at 60 different instances of an insurance agreement and I want to know some sort of exception or carve-out.
Is that exposed in Tolt? And so what's exposed in Tolt is the idea that you might say, hey, show me the limits of liability in each of these 35 insurance policies I have, what the renewal date is, and what the premium is. The reality is you're asking all those questions because you want to do something in an Excel spreadsheet, right?
You're looking for the anomaly. You're looking for pattern. You're looking to do some comparative analysis.
So in Tolt IQ, you can either turn around and run that bulk query and just drop it into Excel, or you could get the results and start a chat with a subset of those documents and say, hey, these five that have this much higher limit of liability, let's go into a conversation with those five documents and go deeper. When I'm pulling it out into Excel, is it a pre-formatted table? Yes, you're going to have the ability to have the questions you asked become the Excel columns, named whatever you want, and get a nice clean row-and-column output that you can then manipulate.
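A rough sketch of what such a bulk query could look like, assuming pandas (with openpyxl for the Excel write) and a stubbed per-document model call; the questions and output schema are illustrative, not Tolt IQ's actual format.

```python
import pandas as pd

QUESTIONS = ["Limit of liability", "Renewal date", "Annual premium"]

def ask_doc(doc_text: str, question: str) -> str:
    # Stub: in practice, one constrained, citation-backed model call
    # per (document, question) pair.
    return "TODO"

def bulk_query(docs: dict[str, str], out_path: str = "policies.xlsx") -> pd.DataFrame:
    """Force an answer to each question from every document and write a
    clean Excel file: one row per policy, one column per question."""
    rows = [{"document": name, **{q: ask_doc(text, q) for q in QUESTIONS}}
            for name, text in docs.items()]
    df = pd.DataFrame(rows)
    df.to_excel(out_path, index=False)  # requires openpyxl
    return df
```

The spreadsheet, not the chat, is the deliverable here: anomalies and patterns across 35 policies show up in a sortable column, and the outliers can then be pulled back into a focused conversation.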
Got it. And so there are tools like that. We offer a tool specifically focused on Excel and CSV.
You hear everybody talking about the amazing capabilities these models have to write code, and you have a firm like OpenAI that's got their code interpreter and their data analytics tool. And so we enable that for you to work natively with Excel and CSV files. We feel like that's code writing, right?
And so it's actually writing Python code against the data that you provide. And so we have a bunch of different tools that we think are all related to diligence. Another example is a feature that says, hey, if you've got a playbook of 45 documents you always expect in a VDR when you're looking at a deal, you can create that custom list yourself as a firm.
And then every time you get access to a VDR, any one of your deal teams can do, let's run a playbook and compare the documents we're expecting against the documents that were provided. And it'll use the AI to give you a list of potential matches and leave the human in the loop to verify these are the right documents that were included or excluded. So we try to provide a series of features and capabilities that we think are just critical to moving faster and enabling the end investment professional or the ops people that are supporting them to get to those topics that otherwise you're always time constrained in diligence.
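A minimal sketch of that playbook check; difflib's fuzzy matching stands in for the AI matching step he describes, and the expected-document list is hypothetical.

```python
import difflib

EXPECTED = [
    "Audited financials FY2021-FY2023",
    "Cap table",
    "Top 20 customer contracts",
]

def run_playbook(provided: list[str]) -> dict[str, list[str]]:
    """Compare the documents you expect against what the VDR actually
    contains. An empty match list flags a likely gap to raise with the
    counterparty; a human still verifies every suggested match."""
    return {item: difflib.get_close_matches(item, provided, n=3, cutoff=0.4)
            for item in EXPECTED}
```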
And so, look, we all have our natural biases around the things we're going to look for. So if you can have AI augmentation help you look for the things you're looking for, then all of a sudden it opens up the potential of hours a week or possibly days a week where in this relatively confined time frame of making a decision on a deal, you can get to yes or no faster. You can dig into a series of topics that you just didn't have the capacity to do 18 months ago.
And I think every investor, regardless of asset class, just wants to get to two points in the diligence process. They ideally want to get to ground truth, but I also think of it as going into the deal completely eyes wide open. Investors a lot of the time know the merits of a deal going into it, which is why they've gotten to the diligence process.
And it's more about knowing, okay, where could this blow up? And truly knowing that ahead of making some final investment decisions. And not least of which, right, invariably is, again, for more sophisticated deals: okay, I got a CIM.
The bankers produced the CIM. It's going to tell the best story possible of this business. It's going to talk about the fact that they're market leaders and they're one, two, and three in each one of these sector spaces.
And here are the baked growth rates compared to some selected group. And then all of a sudden you get access to the VDR, and you've got the actual financials and you've got the actual customer contracts. And look, it's not that in a pre-gen-AI world
you couldn't solve that problem. You solved that problem by spending lots of hours every day and having support teams. And so now all of a sudden I can load up the CIM and I can load up the additional documents and I can say, compare this particular view presented in the CIM against the actual contracts that get signed.
Is this consistent? If I'm buying a company that's really focused on sustainability, right? They're making these claims about their sustainability goals.
Do the actual results that they produce in their report indicate that they follow that? And if I've got three years worth of those sustainability reports, are they actually getting better or are they getting worse? Because every year they're going to probably tell a great story and they're probably going to not tell the story from last year that they don't want to talk about.
So I think the opportunity to just go both wider, but also deeper on lots of topics. Again, because the time window isn't going to largely get expanded, right? So you're always going to be constrained to, I signed the NDA.
I'm planning on funding on date X. I've got either exclusivity to do my diligence, or I'm confined and this is a competitive bid situation.
I've got to get the work done. And so our view is, how do we really augment the precious time that an investment professional has to get through this process? How different do you think this time period is, even from when you started at KKR in 2007, having previously been at JPMorgan?
I grew up in the age of BlackBerrys and the birth of the internet in the 1990s. I think the promise of AI, not the hype, even in its current state, feels to me like a moment not dissimilar to the birth of the iPhone in 2007, but happening far more rapidly. I could have never imagined, when we migrated from BlackBerrys to iPhones, that we would literally have a computer in our pocket with that capability.
It feels not dissimilar. I would say cloud and cyber, but cloud and cyber to me were incremental steps along the way. Whereas the iPhone was fundamentally a different way of interacting with the world.
Do you think this is a fundamentally new way of working? This is a new muscle memory for everybody. The way you talk to models, the way we go about solving problems in a constantly evolving state of model release that's happening, I think is brand new.
And I think people started using the tool as nothing more than, let me see if I can put that Google search into ChatGPT. But that's not really where the power of these tools is. It's one thing to see how smart the models are, and they are very smart.
It's another thing to see how these models start to interact with the web, with things like what you're hearing about with deep research. And then there's a whole other aspect of how well models trained on non-proprietary data deal with the fact that I'm now loading up proprietary data in my own system, and I'm going to give snippets of that proprietary information to the model to leverage its brain power to assess some problem it's never seen before with my content. And so that, to me, is the promise of hooking your brain up to the internet and your own content. I've seen comments I don't disagree with that if we just stopped model development now, we'd have 10 years of unbelievable things we could probably accomplish with them.
One part of me agrees about the continued improvement even if models stopped at their current state of capabilities today. One reason there's a lot of value to come is people like you putting scaffolding around the models to make them more usable for an end customer or client. And then another vector of that value is just exposing the models to data that was outside of their training set.
And I want to push on that latter question to better understand how you think about the accuracy that models can provide on top of data they've never seen, right? Because within any sort of data room or diligence process, there are some finite and knowable answers. And then there are some answers that you get out of that process that are more, I don't want to frame them as subjective.
You try to get to objective answers, but they're objective answers that help form a subjective opinion. Qualitative versus quantitative answers out of the underlying documents. So look, what we have seen and been really impressed by, at least in the context of traditional business-level diligence across the functional areas I described, financial, commercial, technology, tax, operational, et cetera, is the model's ability to leverage its inherent knowledge about the world and to understand content, right?
Constrained within the context window that we've all become familiar with: 128,000 tokens, 200,000 tokens, a million tokens if it's GPT-4.1 or Gemini 2.5. I have been really impressed with how the models understand content that not only are they not trained on, they're not allowed to train on, right? That knowledge is provided temporarily to formulate an answer. So it's using its knowledge from its training data, together with introduced content that it has for a fleeting moment, to formulate: hey, the user asked this question, what's the best way to answer it?
It is clearly a lot more sophisticated than just a next token predictor. I mean, yes, it's simplest form, that's what it is, but it's so complex. What it's thinking about, it's thinking about the conversation I've been in and the last 15 questions that I've asked and the augmented knowledge and the guardrails that I'm providing it how to think about that.
And so I've been really impressed with how well the models do at answering questions when you don't fine-tune. So, we do not train models. We do not allow models to train on our data. We don't fine-tune models.
Our view is we're going to continue to use the latest releases of the frontier models. OpenAI and Anthropic, we also use models from Cohere and we're experimenting with Google's model as well. All these model providers have to offer ZDR, zero data retention policies, which limits the number of firms that you're going to work with.
But look, I think there are probably subsectors of many industries where the models have seen so little information, about some bioengineering type of project, or genomics, or an esoteric portion of the law, that their training knowledge hasn't made them smart enough to answer a complex question about the topic quantitatively. I think even... But they can still probably do it qualitatively.
I think there are different types of questions where the models don't actually need to be that knowledgeable to give you good output. A foundation of the Tolt IQ platform, because it's diligence-based, is that everything has to be cited. And so when people are looking at the responses from our platform, in any aspect of our platform, they're always getting references to page numbers, or hyperlinks that they can click on that take them to that particular location.
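A minimal sketch of how citation-first answers might be represented and enforced; the schema is illustrative, not Tolt IQ's actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class Citation:
    document: str   # e.g. "audited_financials_2023.pdf"
    page: int

@dataclass
class CitedAnswer:
    text: str
    citations: list[Citation] = field(default_factory=list)

def verify(answer: CitedAnswer) -> CitedAnswer:
    """Trust but verify: reject any answer that arrives without a source,
    so every claim can be traced to an exact document and page."""
    if not answer.citations:
        raise ValueError("Uncited answer rejected; re-ask or escalate to a human.")
    return answer
```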
I think over time, people will logically probably become more trusting of the information, but I think in a business context, especially when you're writing a check at the end of the day, those citations go a long way to building trust. And I think where people are headed with a lot of their use cases for Gen AI is, for the time being, we're in a trust but verify mode. Right?
Like I want to test the output. I can see that these firms have made massive headway on reducing hallucination risk. But if you're going to reference financials, reference tax policy, reference an operational issue or an org structure, I want to know the exact page and the exact document that you're sourcing that information from.
And so I think that that remains a critical part of building confidence in the use of AI is having a trust but verify approach. Right. One of the things that I think is fascinating about this time period is the potential for change it has within organizations.
And you kind of see these emergent behaviors already occurring. You and I have already talked about and referenced one of those new behaviors, which is chat with your document.
[Ed Brandman] (37:35 - 37:38)
That didn't exist prior to 2022.
[Desmond Fleming] (37:38 - 38:37)
There was no chatting with your documents. And now that exists. I would be curious to get your perspective on how you think AI can affect the organizational design of a company.
So if you thought about the private equity, private credit, venture firm, investment bank of the future, any company that has some sort of diligence component or continuous muscle in their organization, how do you think AI is going to affect those org structures as a whole? I think AI as an augmentation tool is going to become pervasive. And I think it's probably pervasive within two years.
So I think by the time we're in 2027, your associate, your principal, your director, even your partner on the go, will have some type of access to an AI tool that's sanctioned by the firm. And by sanctioned, I don't just mean someone saying, hey, I'm using ChatGPT.
[Ed Brandman] (38:37 - 38:38)
Yeah, whatever.
[Desmond Fleming] (38:38 - 48:11)
This is like literally AI enabled tools will be in every firm up and down the structure. And it will do different things for different people, right? At the associate level, it's going to dramatically accelerate how they do a lot of their work.
And in some regards, they're going to have to be the most nimble and capable at using AI because they'll always, to some degree, be on the leading edge of what can AI do for me, right? How can I get it to analyze this information, create this new chart for me, do this type of simulation, extract this type of information, right? And the skills to do that will evolve with time.
And I think we'll constantly be changing how we optimally interact with models. Yep. I think the more senior you go in an organization, AI will be there, but it might be masked a little bit more.
Meaning there'll be more out-of-the-box AI tools, right? There'll be a button for summarizing a document that you just got. There'll be a button for looking at the financials. The more senior people will also be using AI.
They won't have to be as creative around how they interact with it. I think you'll actually get a lot of benefit out of agent architectures for more senior people; people will develop tools that solve the problems that more senior people are looking to derive information from. What's an example of a problem? So let's say you're a senior person and you've gotten, you know, the last two months' worth of reports from a company, and you're looking to really identify what the differences are, right?
As a junior person, you might do a lot of iterating, looking for very low-level minutiae issues to solve. But if this was, say, a quarterly report that the company produced for their internal management team, that senior person probably doesn't need to get into all the minutiae. They would be happy just having a high-level comparison and letting the model decide: okay, let me take a quick look at the financials, a quick look at the commercial contracts, a quick update on their debt financing, and give a high-level perspective.
That doesn't mean that what the junior associate is doing isn't critically important. It just means, and you asked the question about how this changes organizational structure: we're going to be empowering everyone up and down the org structure to use AI, but in different ways.
And I think that that's fine. I think what it also means is you have to think about the transparency of your documents when you're doing business so that everybody's working from the same content in terms of operating. But I think that this will, in some ways, it democratizes a lot of activities that otherwise you need to rely on someone who's an Excel whiz, right?
Or a PowerPoint whiz or knows these documents inside and out. All of a sudden, you can offer people the ability to do analysis and gather knowledge without having to ask somebody else to help them. There is one strain and viewpoint of the world that is saying, get rid of the associate.
You don't need them anymore. Some people may go down that route. Who knows?
Another way of saying that what that world opens up is there's better and more robust decision-making overall because you'll have more viewpoints able to run and get the analysis they want from the sources of truth, but people will still view the opportunity set from different perspectives. You may have a room of 10 investors who have five of them say, we believe the market is going to decline for XYZ reasons, and here's the data that supports it. And the other five investors say, actually, we believe the market is going to improve for ABC reasons, and here's the data we have to support it.
And both of those things can be true at once. And what I think AI enables is all 10 of those people to be informed and be armed with the data with much quicker cycles. That would be my rosy view of AI adoption within firms.
So look, I think that's a completely reasonable way to think about some of the things that AI enables, much like we can now ask AI to do a red team analysis on a decision we've made on a deal. You don't have to like the outcome of what the model spits out with a red team analysis, but wouldn't it be useful to see, in theory, an unbiased view relative to how the rest of the firm thinks about that? I think that while the augmentation and the number of value-add things that the AI technology is going to allow teams to do will be significant, I don't discount at all, to be honest, though, the issue of what does this mean for the labor force?
Because at some point, the economics are going to kick in around, OK, if one person and AI can do the equivalent of what three other people used to do together, do I have the demand curve for additional work? Are there other things that we will get down the path of researching? Will this open up a wider set of opportunities for us to reach?
I think those are all possibilities, but I think invariably there is a big challenge coming within the next two to three years that as the AI tools and the reasoning further advance, in particular around autonomous agent architecture, you're going to have lots of companies trying to evaluate what does my workforce look like? What is the true productivity of a person with AI augmentation? What models or foundation model providers are you watching most closely as it pertains to them releasing the next stage or output of their models?
Yeah, so the two that we focus the most on is OpenAI and Anthropic. I think they both have a very healthy hybrid of consumer-facing revenue streams and feedback coming to them as well as corporate streams. And I think they both take, honestly, very different approaches to their model development and design, and they take different approaches to training as well.
What are those differences? So some of the differences are, for instance, if you're a free user of ChatGPT, which is amazing to be able to have a free solution, your data could be used for training as a free user. Now, once you're a paid user, you can turn that feature off.
With Anthropic's Claude, even on the free tier, your data is never used for training. It's literally not an option, right? That's a fundamentally different approach.
OpenAI has published kind of their guardrails. They talk about kind of the system guardrails they're putting in place for their solution, and they talk about it broadly. Anthropic has more focused on this concept of what they refer to as constitutional AI, literally guardrail rules that they publish to everyone that they think tries to enforce the model being most helpful and least harmful at every turn.
And so, you really are seeing differences in approach. You've seen OpenAI lean heavily into their DALL-E model and the ability to do image generation. You see Anthropic leaning more heavily into, I think, the code side of it.
Now, OpenAI has leaned into that as well. So, we focused on both of them because I think that's the top of the house in terms of the AI wars right now. They're both obviously funded incredibly well by Microsoft, Google, Amazon, and others.
And they're releasing not only the most forward models, true high-end reasoning models, but they're also both trying to be very innovative on the user experience side, and we all don't know yet what great looks like in terms of the user experience with AI. As in, you will look to their design decisions to influence Tolt, because you interact with them primarily through API? Yeah, I mean, look, we are clients of both, you know, we use their models, we use their tools, right?
So, I'm a ChatGPT Enterprise firm, I'm a Claude Enterprise firm, I'm a Gemini Enterprise firm, in addition to building my tool. I just think that, much the way the iPhone's release and the design of iOS began to influence so many different people's approaches to user experience, right? Including in business, right?
It changed the way a Salesforce or a Workday designed their apps. It gave us something like Slack, right? Where everyone's so comfortable with messaging.
And so, I just think, because you've got so many people using ChatGPT and Claude and Gemini in the non-business portion of their lives, they are going to influence the UX. I think there's a lot to be done on the user experience, though. There's this whole notion of combining modalities; we don't personally today, but there are firms trying to figure out how to integrate voice, how to integrate text, how to integrate images.
We have streaming components of AI. We have delayed components, right? You don't get a deep research report back right now, right away.
You got to wait 15 to 20 minutes. And I don't think you have... I mean, being able to get it back in 60 seconds would be awesome.
[Ed Brandman] (48:11 - 48:12)
Everyone would love that.
[Desmond Fleming] (48:12 - 48:20)
But right now, it's no different. If you're going to contract with McKinsey, you expect them to take minimum three weeks to give you a good output.
[Ed Brandman] (48:20 - 48:20)
Right.
[Desmond Fleming] (48:20 - 53:03)
So, now, great. You only have to wait 45 minutes max, right? So, that's amazing.
But my point here is there are all these different ways. We're all getting comfortable interacting with the AI world, and with what we define as good and what we define as great.
And I think that is going to have a heavy influence on the business stack of AI, whether it's a firm like mine that's building a vertical solution within the diligence domain, or you're trying to build a horizontal solution, right? Like Glean is trying to do. Yeah, yeah.
Or Microsoft's trying to do with Copilot. So, my perspective of the world today is, hey, from 2000 to 2022, the gift that kept on giving in venture markets was application software. Yeah.
And then the subsegment within application software that also kept on giving was vertical SaaS. So, I actually take Salesforce as like a point-in-time example of what your classic enterprise software workflow was, and that's kind of stamping it out. Let's call it 1999 or so.
And then people used that as a template to go attack different industries. So, you have Veeva, you have ServiceTitan, you have Procore, you have Toast, you have Shopify to a certain extent. I think that same kind of race to build out the application layer is occurring in the age of AI as well.
And people are taking that verticalized approach. You're focusing on diligence within private markets broadly. Others are focusing on legal services.
I heard for the first time yesterday, someone focusing on the oil and gas field. And people are really pushing the frontier of this. I would be curious to get your perspective in terms of what do people misunderstand of what to build at the frontier of the application layer?
Yeah. So, I think what people miss, and I think this is the hardest thing for corporates to solve for, because I actually don't think there are enough people who understand how to do it: the models changing is not an insignificant issue.
Like, in theory, the models have a certain amount of backwards compatibility, but they also make leapfrog advances. And so, that thing you tried to do in ChatGPT three months ago suddenly just works. Or that thing you were doing that worked amazingly well three months ago, you now have to do something different to get it to work, right?
Meaning how you generated an image or how you created a table or how well it cited sources or how it interacted with the web. So, as the models get smarter and they have more tool capability, like how they can interact with the world, that is a very hard thing to continuously build to on an optimized basis. Meaning, if you really want to get the biggest bang for the buck, you've got to really be leaning in on the very front edge.
And you're also leaning in a very different way than technology ever got deployed before, right? Every other technology platform that we've just talked about and the SaaS stack is designed around structured data and very deterministic outcomes. You run a Salesforce report, it's going to look like X.
You put your data into Workday, it's going to look like Y. You're going to use a financial reporting system like SAP, it's going to do something repeatable every time. So, not only are you building to a series of models that are getting smarter and adding other capabilities, but you have this whole additional component of non-deterministic outcome that you're trying to work through.
I don't think non-deterministic outcomes are bad as long as you can document and evidence why it might look different, right? You can ask the same question of three different people in an office and ask them to put together a presentation. They're going to come up with different formats for the tables and the graphics that they're going to include.
That's okay as long as you feel confident in the source material and their ability to validate it. So, I think building tools on models that are constantly changing with this underlying change that's constantly moving, it's really hard when it's non-deterministic at the end, right? We'll get an output and someone will be like, why is it doing that?
And I'm like, I don't know, models are weird. It worked fine in the last model, they clearly changed the guardrails, and so now we have to change our guardrails. Well, even they're not fully sure in terms of what's working.
[Ed Brandman] (53:03 - 53:03)
Well, right.
[Desmond Fleming] (53:03 - 59:33)
There's the emergent component: they can't consistently explain why it does what it does, right? Why, when I fed it in... I was helping a firm that was involved in the cell tower business. They had a lot of cell towers that they owned across the country.
And we fed the spreadsheet in with a lot of the cell tower information, and we asked it for a visualization. And all of a sudden, it came back with a visualization that literally looked like a map of the United States where all the towers were located. Why did it create that?
And by the way, it turned out it wasn't a map. It was actually a scatter chart that it created, but because the cell towers were so pervasive along the borders of states, visually I thought I was looking at a map when I was just looking at a scatter chart that effectively traced a map of the United States. And so when the client says, well, what else can I do with it?
Sometimes your answer is, I'm not sure. Push the limit. See how well it can do this.
Because when I think about it as, hey, they're just trained on the reference data, right? Do you think in that example, the reference data contained a lot of scatter charts? Yeah.
I mean, maybe, right? I mean, look, there are clearly things models can't do, right? Ask a model to draw a picture of a clock with the hands at anything other than 10 and 2; it's literally impossible, right?
I mean, something as simple as that, right, still doesn't work. And yet it can draw a scatter chart from geolocations of cell towers, right? So I don't think the model providers are ever going to be transparent.
I think partly it's proprietary knowledge for them, like, what are they training on? How do they make it better? It's obviously not just what are they training on, but how are they training the models to get smarter, right?
You could have the same set of knowledge. And if you look at that knowledge differently, you acquire new capabilities. And so I think that for people trying to build, it's a very non-deterministic build.
You're building for the future and you're kind of rolling the dice. And even with all the benchmarks that are out there on how well these models do on solving certain math problems, taking certain types of tests, solving certain types of puzzles, that's great conceptually, but most of those benchmarks don't actually give you great data on how well these models deal with RAG-based systems from a content standpoint. So, you know, look, I tell our clients this.
When clients ask me, how do we benchmark? I'm like, well, we have about 350 questions, from quantitative to qualitative, that we rate and evaluate for accuracy and relevancy. And they said, well, what else do you do?
I said, you. And they're like, what do you mean? And I said, you're part of our evaluation. You are going to do things with your documents.
You are going to ask questions that I could never envision. And you're going to tell me whether you're satisfied with the answers or not. I just can't predict all of it.
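A rough sketch of the kind of regression benchmark he describes: a set of question and expected-answer pairs replayed against each new model or pipeline release. The grading here is substring matching for brevity; real grading would mix rubric scores and human review, and the example entries are hypothetical.

```python
def run_evals(qa_pairs: list[tuple[str, str]], answer_fn) -> float:
    """Replay a benchmark of (question, expected substring) pairs against a
    new model or pipeline release and report simple accuracy."""
    hits = sum(1 for question, expected in qa_pairs
               if expected.lower() in answer_fn(question).lower())
    return hits / len(qa_pairs)

# Three illustrative entries of the sort a ~350-question suite might contain.
BENCHMARK = [
    ("What is the limit of liability?", "$5,000,000"),
    ("When does the master lease renew?", "2026"),
    ("Who is the largest customer?", "Acme"),
]
```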
It's one of the coolest things about AI that like your imagination is the limit. I've said to multiple people, it's the art of the possible. I saw something in your offices about it's only impossible until somebody proves otherwise.
I think that's true of AI. I don't think most people really know where the boundaries are. And I think most of us who are even down the rabbit hole and in this every single day, even the most experienced people are on a regular basis are saying to themselves and their colleagues, I had no idea I could do that.
One, I think it's about how you become reflexive with your AI usage, to the point where you break the habit of the past 20 years of technology and the internet, which is not easy: oh, let me go to Google. I think a lot of it is also reframing your expectations around how long it takes to deliver something.
We're so used to these instant feedback loops and AI is not necessarily great at that, as well as AI is not necessarily always great at taking the outputs that it can generate instantly and then connecting it to other tools and systems that could then give you what you want. Unfortunately, I don't think we easily break that habit. And so I think what the leading edge firms are challenged by is people still want that instant response.
They want the benefit. I mean, think about traditional social media platforms that did lots of A/B testing, part of the, why do I suddenly see this differently than everybody else sees it? That was fine when you had deterministic outcomes.
So how do you even do A/B testing when you don't know what the outcome is going to be? It's much more challenging, and you have to be very comfortable in a fluid, iterative process where on some days things are going to work and other days they're not. The firms that win need to make a fundamental bet that the models are going to keep improving.
My view is that at any given point in time, the models are as dumb as they're ever going to be. What is your advice for, like, a college student today coming into the workforce? If you aren't experimenting with AI every day in some way, shape, or form, and that can be 15 minutes, you're failing.
I think that nobody's getting a job in two years' time where part of the interviewing process isn't just about how you've used AI, but maybe even, in the interview, showing in real time how you solve a problem. And regardless, obviously we were talking about private equity, but you think that applies regardless of role? I think it applies regardless of role.
Do I think people who are going to be surgeons need to worry about this? No, for the time being. Actually, funny enough, I was with a great friend of mine.
He is in his fellowship for heart surgery and he says, I use it all the time. Not for actual surgery, but he uses it all the time. How do I research this?
Can I take those test results, put them into a secure chatbot, and see how it evaluates them? My message to college students is, and it was funny, I asked them all to raise their hands if they had used ChatGPT in the last year while they were at college. All 60 kids in the room raised their hands, right?
And so it's here, it's happening, it's a part of it. And so don't just use it as your Google search, like push the limits. You want to write your own app?
Have it help you learn. I had an intern working for me this summer. He's still with us.
[Ed Brandman] (59:34 - 59:35)
Good intern.
[Desmond Fleming] (59:35 - 1:00:11)
Good intern. Never wrote a line of code in his life. He was working on automating how we onboard new users.
He shows a demo to the entire team, and there's clearly CSS code that has been written to help augment this. And I was like, Jared, you don't code. He's like, no.
I talked to one of the senior engineers, they told me the things to look out for. I use ChatGPT to help me write the code. I cut and paste the code in.
I put it in for a pull request. He's like, I've never done engineering before. That's unbelievable.
[Ed Brandman] (1:00:12 - 1:00:12)
It's fast.
[Desmond Fleming] (1:00:12 - 1:01:14)
I had the exact same experience. I only knew HTML and CSS. It was actually very similar: I talked to an engineer, actually a designer, like, hey, I need to get some project done.
What is this stuff? He told me and I'm like, okay, I can learn that. But never deeper, nothing in JavaScript, nothing in TypeScript, nothing in Python, none of that.
And through using Claude specifically, I've been able to create three or four websites and apps, both deployed locally as well as hosted on Vercel. And I've never had that skill in my life. Right.
The guy who leads product for us uses AI models in his design. He'll literally have it build Tailwind code and give us the visual in HTML. Right.
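As one hypothetical illustration of that workflow, a model-generated Tailwind component could be previewed from a short Python script. The markup here is hard-coded where a model's response would go; the Tailwind CDN script tag is the real, publicly documented way to try Tailwind without a build step.

```python
# Write a (stand-in) model-generated Tailwind component to a file and open it.
import webbrowser
from pathlib import Path

component = """
<!doctype html>
<html>
  <head><script src="https://cdn.tailwindcss.com"></script></head>
  <body class="bg-gray-100 p-8">
    <button class="rounded-lg bg-indigo-600 px-4 py-2 text-white shadow">
      Onboard new user
    </button>
  </body>
</html>
"""

out = Path("preview.html")
out.write_text(component)
webbrowser.open(out.resolve().as_uri())  # show the visual in the browser
```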
And so I think college students broadly, to just close this topic out, have to accept the fact that, one, models are really smart. In certain fields, the models are already smarter than human beings a lot of the time.
[Ed Brandman] (1:01:14 - 1:01:15)
Yeah.
[Desmond Fleming] (1:01:15 - 1:03:28)
But that shouldn't faze you, and you shouldn't be fearful of it. You should look at it and embrace it as, this is going to augment me. It's going to improve how I understand the world around me, the product I'm selling in marketing, the company I'm going to work for, who my customer is, the hike I want to go on. If I'm in hospitality, I can look up information on competitors.
The power that it will provide to people, I think just helps everybody rise up. I think if you bury your head in the sand and you're like, I'm the smartest person in the room, you're not going to be the smartest person in the room if you're not figuring out how to leverage AI. And that doesn't mean you should just do it without fact-checking.
It doesn't mean that there's lots of creative things people can still do amazingly with their hands that AI can't. But why aren't you trying it every single day? Yeah.
It's almost like there are a few analogies. You could say it's like, hey, try doing an LBO without Excel today. You'd be like, that's crazy.
Or for other people, try and find a job without the internet today. You'd be like, that'd be stupid. Yeah.
Okay. Same thing applies here. Use the AI.
I was telling the students I spoke to yesterday, I said, look, in the 1970s, calculators were introduced as a way to do math. And there were professors who were like, no calculators in my math class, no calculators in my science class. And then, until computers came along, HP calculators were our best friends in college and then ultimately in high school.
And then they just became a part of how we did things. Laptops became a part of how we managed information. And so, to me, AI is going to be this companion that we've got on our phones, we've got on our laptops.
And look, me personally, I probably use it 20 times a day. I also find, to your interesting point, I do think it's going to change the entire nature of search. I mean, I took a poll, we have 27 people in my firm, and our Google search is probably down 80%.
We search through ChatGPT. So, is it down 80%, or is it 80% of share? I mean, we're literally using Google 80% less.
[Ed Brandman] (1:03:28 - 1:03:28)
Yeah.
[Desmond Fleming] (1:03:28 - 1:05:43)
Wow. That's dramatic. That's crazy.
Yeah. Right? And it's also because, as a team, we're evolving how we think about search.
Search isn't just about "bring me to a link." Search is about "explain this to me," "give me three different options." Those are things that are very hard to do in a traditional search architecture, regardless of the fact that Google has literally indexed the world and it's a great tool.
I'm actually going to take that. I've never asked the model, give me three different options for one thing. I'm going to take that.
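For anyone who wants to try it, the "explain this to me, then give me three options" style of search is just a prompt. Here's a minimal sketch using the OpenAI Python client; the model name and prompt are illustrative, and it assumes OPENAI_API_KEY is set in your environment.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{
        "role": "user",
        "content": "Explain vector databases to me in plain terms, then "
                   "give me three different options for a small team, "
                   "with the trade-offs of each.",
    }],
)
print(resp.choices[0].message.content)
```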
So I do think you're seeing that change. Look, I think it's going to change how companies think about, like, the world of SEO will not look the same, right? SEO and paid search, well, how do you think about a world where there isn't advertising, or where people are paying $20 a month, because that's not going to be everybody who's on the platform, but all of a sudden, how you think about your website and your content and getting discovered changes?
What's the agent-first version of the internet? Right. And what's interesting about that is that it's non-deterministic.
It's a little bit of a black box. And so, I'm not sure consistently the model providers themselves could tell you, why does it source this source of information versus another? Do they have an internal ranking of trusted sources, right?
Is a corporate website ranked differently than Wikipedia, ranked differently than Reddit? I mean, I don't know. Yeah, but volume matters, right?
Going back to the clock example. Absolutely. I think college students should really embrace it.
I think it doesn't matter if you're going to work at PwC or a private equity firm, or you're going to go work in sales at Walmart. Any type of job is going to benefit from this technology. And maybe to wrap things up, Ed, this has been great.
Thank you for taking all the time. I want to wrap things up with talking about both your experience with entrepreneurship today, and advice you would give to just other founders who are either building at the application layer or building in capital markets. And where I'll nudge the question to start is, why come out of retirement?
You had a great run. You did a lot of stuff.
[Ed Brandman] (1:05:44 - 1:05:44)
Why do that?
[Desmond Fleming] (1:05:44 - 1:06:06)
I have a simple answer for that, which isn't going to satisfy your listeners. But I came out of retirement because my oldest son was like, I'm going to help you figure this out. Had he not helped, even with the greatest friends I've got, I probably would have stayed in the woods.
But when one of your three kids comes to you and says... So that was more of a personal thing?
[Ed Brandman] (1:06:06 - 1:06:09)
That was very personal. You're like, I get to work with my son.
[Desmond Fleming] (1:06:09 - 1:11:21)
To be clear, I was a happily retired guy. I was not bored. After 27 years on Wall Street, even five years in, I was still decompressing.
Yeah. So it was a very personal decision for me. That said, once I made the decision to go all in, if we pivot to the advice side of it for other entrepreneurs, look, I would describe myself as a very non-traditional entrepreneur at 57.
I'm not a traditional startup guy. I come out of industry. That is my secret sauce, is my relationships and my knowledge of the industry.
You know the CTOs, you know the CIOs, you know the workflow, you have people you could hire easily. Shortcut, shortcut, shortcut, shortcut. And I would actually, though, say most importantly, I lived in the business.
I was very fortunate, whether it was my trading, capital markets, and ops career at JPMorgan, whether it was my electronic trading and trading markets work at Robertson Stephens on the West Coast, or my job as CIO with operational responsibility at KKR. I've always loved the business side of the technology problem I'm trying to solve. And I was very fortunate that at every single turn, people let me go deep into the business, right?
How do trading and capital markets work? How do trades settle? You know, how does electronic trading and retail order flow play into how capital markets work?
At KKR, meeting with LPs, what are they looking for? Talking to the CTOs of our portfolio companies, and talking to the investment professionals who are sitting on the boards and looking at that. Being invited to the diligence table and getting a chance to do diligence.
I mean, I was very fortunate that that knowledge was built up. I think for us as a firm, our moat is built around the fact that the technology is being built off of the insights of a large number of people who work at our firm who come out of industry. They're actually not serial entrepreneurs.
They're people who have worked inside private equity, private markets, private credit operations. So they felt the pain of what life is like in a pre-Gen AI world. And so I think it puts us in a much better position to relate and have empathy for what investment professionals are doing.
I mean, the interesting challenge that we have as a firm is that the vast majority of private equity, real estate, and infrastructure investors, private credit is a little bit different, but the vast majority of those investment professionals have, broadly speaking, not used sophisticated technology beyond the boundary of Excel, email, CapIQ, market data, Bloomberg in the credit markets, and other data sources. They're not users of traditional SaaS.
That's for operations and the data people and the compliance people and the finance team to use. And that's really good. And so earlier in our conversation, when I talked about getting the user experience right, I'm trying to get the user experience right for a bunch of people who are very detailed in the work that they do, who have a highly focused, flexible platform in Excel that they can make do anything, and who plug in market data tools that allow them to do that.
That is a very different type of user experience to build for. And so I think anybody who's out there, look, whether you're focused on financial markets or otherwise, I do think that getting this user experience right is a very big deal. And I think it's very different than most other tools.
I think we are going to have to experience an iPhone moment, where you're like, yeah, look at all the things I was willing to trade off and give up on an iPhone, because the iPhone's contact manager and calendar manager isn't the best tool in the world. But it was so easy to use, it became indispensable for us. Right.
Their note-taking was not the most sophisticated in what you could order and sort. Yeah, but it was like they were excellent at, like, three things: connecting to the web, the phone, and maybe the camera. And so I think we all have to be mindful when we think about where AI can go.
One, we should be very careful about not letting things get overhyped. And two, really listen to your customer. Don't build a solution in search of a problem.
Know that the problem exists, empathize with the users, help them solve the problem together, and be prepared, in the world we're now living in, to iterate rapidly on your platform, both from a model standpoint and a user experience standpoint. Because I used to feel confident that I had a two-to-three-year roadmap on the tech stack when I was a CIO or CTO. In the AI world, I feel like, best case, I've got a three-month view. Yeah, I was gonna say.
And I'm wrong 50% of the time. Yeah, yeah. It just changes so quickly.
Well, Ed, let's end there. I so appreciate you for coming on to the pod. I think this is going to be episode seven or eight of The Longest View.
[Ed Brandman] (1:11:21 - 1:11:23)
Congrats on that, that's super exciting.
[Desmond Fleming] (1:11:23 - 1:11:26)
We got some momentum and yeah, thanks so much.