Beyond the Prompt

Ever wonder how massive sales forces at companies like Microsoft, Salesforce, or Databricks manage to consistently hit their targets and understand complex customer needs? A huge part of the answer lies in sophisticated, AI-driven insights, and today's guest is right at the heart of building that technology.


Today, I'm joined by Frank Wittkampf, Head of Applied AI at Databook. Databook is a platform designed to supercharge enterprise sales productivity. They don't just offer generic AI; they build deeply specialized systems that analyze vast amounts of data – financial reports, news, competitive landscapes, even proprietary insights – to tell salespeople exactly what to position, why, and when.


They are moving beyond simple chatbots or free-form AI agents. Databook focuses on applied AI, using what Frank calls 'guided reasoning' to ensure the insights delivered are consistent, reliable, and directly drive sales outcomes, like significantly increasing deal sizes.


In this episode, Frank dives into how Databook's AI works, why a 'guided' approach beats pure agentic systems in enterprise, the surprising challenge of people over-imagining AI's current capabilities, how they navigate the R&D frenzy to deliver real value, and their vision for a future where AI proactively coaches you.


Takeaways
  • Why "Guided Reasoning" Beats Pure AI Agents: Enterprise needs predictable, repeatable outcomes - not creative exploration
  • The Over-Imagination Problem: Why Computer Use and other flashy AI features aren't ready for enterprise deployment
  • Data Strategy That Works: How Databook combines public data, proprietary datasets, and pre-solved analysis for instant insights
  • R&D vs Reality Balance: Practical framework for experimenting with cutting-edge AI while delivering customer value
  • The Future is Proactive: Why the next leap in AI isn't just responding to queries, but actively coaching users
  • Enterprise Integration Challenges: Real talk about data access, security approvals, and building trust with large customers

Sound Bites
  • "Free reasoning is all fun and great, but in enterprise, fully free reasoning is just not that helpful."
  • "Computer use is incredibly useful... The problem with it is it's just so non-practical at the moment. It is incredibly slow."
  • "For AI to deliver you a proper answer, you actually need to pre-solve that answer pretty thoroughly if you want to do a good job at it."
  • "We can see deal sizes increase by 1.9 to 2x when people are actively using this."
  • "The big change that's coming in AI is not just you engaging with it, but it engaging with you and helping you."

Chapters
00:00 - Introduction to Databook and Enterprise AI Reality
03:08 - What is Databook? Serving Microsoft, Salesforce & Databricks
04:33 - AI-Native Features: Beyond Simple LLM Implementations
06:17 - Customer Deep Dive: Why Big Tech Companies Choose Databook
09:18 - Proprietary Data Strategy and Pre-Solved Analysis
11:03 - Day-to-Day as Head of Applied AI: Product to Engineering Translation
14:21 - Balancing R&D Innovation with Customer Results
18:58 - Testing and Experimentation in Enterprise AI
21:14 - Dogfooding: How Databook Uses Its Own Product Internally
23:24 - What's Next: The Push Toward 4x Deal Size Increases
25:12 - Guided Reasoning: The Middle Ground Between Workflows and Agents
26:19 - Biggest Roadblocks: Enterprise Speed and Data Integration
27:49 - Technical Deep Dive: Delta Lake and Joint Data Access
30:07 - What Frank is Most Proud Of

Connect with us
Where to find Frank:
LinkedIn: https://www.linkedin.com/in/wittkampf/
Medium: https://medium.com/@frankw_usa
Website: https://databook.com/

Where to find Sani:
LinkedIn: https://linkedin.com/in/sani-djaya/
Get in touch: sani@gridgoals.com

What is Beyond the Prompt?

This is the show where we go deeper than the hype. Where we go beyond just the prompt. On the podcast, we talk with product, engineering, and GTM leaders who are building AI-native products and using AI to supercharge how their teams operate.

If you’re looking to scale your business with AI or want to learn from those doing it at the frontier, then you’re in the right place.

Frank Wittkampf (00:00)
Computer use is a fun example. Computer use is incredibly useful.

We're obviously experimenting also with, like, can we have something scour LinkedIn for you using all of your own context and stuff like that. The problem with it is it's just so non-practical at the moment. It is incredibly slow. It needs your login. You can't get to places anymore. So practically, implementing that in an enterprise setting is highly complicated,

we can see deal sizes increase by 1.9 to 2x of what they are when people are actively using this.

our biggest goals are, like, how do we double those, and how do we double those again? So we're very outcome-based, and that's about getting deeper with our current customers.

free reasoning is all fun and great, but in enterprise, fully free reasoning is just not that helpful. For research and exploration, free reasoning is great.

Networks of agents are not gonna deliver it, and a fully...

hard-coded set of flows is absolutely not the modern way of solving things. The way somewhere in between, that gets me a well-defined, repeatable enterprise output, that's kind of the direction that we are pushing our agentic reasoning in.

Sani Djaya (01:03)
Yeah.

Sani Djaya (01:14)
Hey everyone, welcome to Beyond the Prompt, I'm your host, Sani. This is the show where we go deeper than the hype, where we go beyond just the prompt. On this podcast, I talk with product, engineering, and go-to-market leaders who are building AI-native products and using AI to supercharge how their teams operate. If you're looking to scale your business with AI or want to learn from those doing it at the frontier, then you're in the right place.

If you're interested in coming onto the podcast or just want to chat with me on the cool things you're doing with AI, then check out the link in the description to get in touch.

Are you wondering how massive sales forces at companies like Microsoft, Salesforce, or Databricks are leveraging AI to consistently hit their targets and understand complex customer needs? A huge part of the answer lies in sophisticated AI-driven insights, and today's guest is right at the heart of building that technology. Today, I'm joined by Frank Wittkampf, head of applied AI at Databook. Databook is a platform designed to supercharge enterprise sales

productivity to the next level. They don't just offer generic AI. They build deeply specialized systems that analyze vast amounts of data, financial reports, news, competitive landscapes, and even proprietary insights, to tell salespeople exactly what to position, why, and when. They go beyond simple chatbots or freeform AI agents. Databook focuses on applied AI, using what Frank calls guided reasoning

to ensure the insights delivered are consistent, reliable, and directly drive sales outcomes, like significantly increasing deal sizes.

In this episode, Frank dives into how Databook's AI works, why a guided approach beats pure agentic systems in enterprise,

how to manage the challenge of people over-imagining AI's current capabilities, how they navigate the R&D frenzy to deliver real value, and their vision for a future where AI proactively coaches you. Let's get into it.

Sani Djaya (03:08)
Frank, thank you so much for joining me. I'd love to have you share a little bit about what Databook is,

what the company does and your kind of role as head of applied AI at Databook.

Frank Wittkampf (03:20)
Of course, thank you for having me on. So, Databook is a platform that enables salespeople at enterprises to sell. We basically help increase the productivity of large sales forces. So the types of customers that we're serving are, like, a Microsoft, a Salesforce, a Databricks, and larger companies like that. These companies have...

tens of thousands of salespeople, and if you're trying to manage and ensure that people are productive, sell to a consistent level, and use the right insights, that's the type of stuff that you use our product for. And then my role at the company, leading Applied AI, means that anything AI reports up to me. Our goal is to build systems that actually deliver AI outcomes,

not just like, let's connect some fun agents together, but like, how do we actually deliver something to a salesperson that allows them to be more productive or better informed or work in a way with their customers that allows them to sell better.

Sani Djaya (04:19)
Nice. So I'd love for you to dig more into, yeah, what are some of the AI-native features that you have in Databook? How are you able to deliver new value with LLMs and agents that we couldn't have done before LLMs?

Frank Wittkampf (04:33)
Sure. I think what's interesting about Databook is the company was founded in 2017, so it's been around for a little while. And the platform basically has a lot of layers of insights that are built into the core of it, even from before AI. So what's great for my team is we have a lot of material to work with to be able to deliver a decent answer.

What the core of it is, is if you ask a question like, okay, I would like to prioritize my accounts and figure out who I should sell to, or, hey, I am selling to, pick some company you're trying to sell to. Let's say you're at a big tech company and you're trying to sell to Nike. What should I position for Nike? The thing you don't do is just have an LLM, throw a bunch of data at it, and

have it execute. The thing you do do is say, okay, given what I know about this customer or this list of accounts, what are the sets of steps that I should take to resolve this problem, and then apply that problem solving in the right order such that you can come to a decent answer. So a lot of the work we do is: how do you guide either an AI agent or a workflow to a certain outcome,

and do that in a way that makes sense from a sales methodology perspective. And that's a little less straightforward, because if you just connect a set of agents together, then it's ambiguous where it might end up. And in enterprise, you often want to deliver a very consistent outcome. So you need to have a more guided type of setup to be able to do that. Does that make sense? Did I miss anything you wanted to know there?
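
To make the "sets of steps" idea concrete, here is a minimal sketch of guided reasoning under our own assumptions; the step list and `call_llm` stand-in are illustrative, not Databook's actual implementation:

```python
# A minimal sketch of guided reasoning: the step plan is fixed by the sales
# methodology, while the content of each step is left to the model.
# call_llm and GUIDED_STEPS are hypothetical, not Databook's API.

def call_llm(prompt: str) -> str:
    # Stand-in for any LLM client; returns a placeholder so the sketch runs.
    return f"[model output for: {prompt[:60]}...]"

GUIDED_STEPS = [
    "Summarize {account}'s financial position from the supplied context.",
    "Identify {account}'s strategic priorities and competitive pressures.",
    "Map each priority to a product in the seller's portfolio.",
    "Draft the positioning: what to pitch, why, and why now.",
]

def guided_answer(account: str, context: str) -> str:
    notes: list[str] = []
    for step in GUIDED_STEPS:
        # Reasoning is free *within* a step, but the order *across* steps is
        # fixed, which is what keeps the outcome consistent run to run.
        prompt = (
            f"Context:\n{context}\n\nPrior notes:\n{''.join(notes)}\n\n"
            f"Task: {step.format(account=account)}"
        )
        notes.append(call_llm(prompt) + "\n")
    return notes[-1]  # the final positioning draft

print(guided_answer("Nike", "annual report excerpts, news, technographics"))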

Sani Djaya (06:11)
No, that was good. And then, I don't think you mentioned it, but who are some of the customers for Databook?

Frank Wittkampf (06:17)
Yeah, so Salesforce is a main customer, Microsoft, Databricks, there's a set of other large companies, but I'm not allowed to disclose all of them. But there are companies with very large sales forces that have like tens of thousands of sellers. And yeah, we're basically then a tool that those sellers can use within those companies.

Sani Djaya (06:25)
Yeah, no worries.

Yeah, and it's pretty incredible, because Salesforce and Microsoft are definitely touting externally that they're an AI-forward company. And it seems like you guys have figured out ways to deliver value to them that they haven't achieved internally by creating their own tools to deliver something similar. What do you think is differentiated in Databook versus what they could have done themselves, or any other providers of similar platforms?

Frank Wittkampf (07:02)
First of all, I think they have amazing platforms, and they allow you to look across a ton of your enterprise data, to run stuff across a lot of different systems. So I think their platforms are awesome. The thing is, we are highly specialized on figuring out these sales processes. There's...

There's years of perfection on, like, how do you run this type of a process? So for example, we often try to make sure that the seller strategically positions their product, right? And for that, you actually need to understand the company that they are selling to. And to understand that company, you need to know, okay, like I mentioned Nike earlier, how is Nike doing competitively versus Adidas and other shoe companies? You need to go pretty deep, with almost a...

Wall Street-like financial analysis to properly understand that. And we've got a bunch of normalization and comparison to put it in the right set. And now, if these are weaknesses, then you should go look at when there were stock drops and what news was there exactly around specific stock drops, et cetera, that gives you interesting information. So these companies have fantastic products, but

given our depth of focus, there's parts that we can solve more properly. They're trying to solve for everything, and we're trying to solve for APs. And I think that's where a lot of it comes in. And then secondly, some of these things, like you...

Sani Djaya (08:19)
Yeah.

Frank Wittkampf (08:27)
I think the industry is starting to realize that outside of working with agents, there's actually a pretty deep need for a combination of agents and workflows. And we've been working on that a bit longer than most people have. And therefore, I think we can get to a significantly higher quality output than a Perplexity Pro or something like that might get you, or that a generic copilot implementation might get you.

Sani Djaya (08:36)
Mm.

Yeah. So it's the focus over the years, and being very deep and very intentional of, like, we want to make sure that for these specific use cases it's super high quality, way deeper than anybody else would spend time on, and building a product for them. Is that right? That's awesome. Anything else in terms of AI-native features in Databook that you want to share that's special and differentiated? It also sounds like there's a little bit of,

Frank Wittkampf (09:05)
That's right. Yeah. Yeah.

Sani Djaya (09:18)
you're grabbing a lot of public information and processing that. I'm also curious if you're grabbing any kind of proprietary information. You don't have to share the sources, but are there any specific sources where you're like, oh, we have special access that others don't?

Frank Wittkampf (09:33)
So there's a lot of data we use. A set of it is paid data. So there's an infinite amount of companies that could deliver you data. So imagine the following scenario. I, again, I'll just keep pulling on the Nike example. I'm a seller at a very big enterprise and I'm trying to sell something to Nike.

I might have a portfolio of several dozen products that I could sell to them. What matters is: is this the right time to sell to this company? What has Nike bought in the past, from us and from others? There's technographic data behind that. There are the strategic types of analysis that I told you about, which is ours: we have a large proprietary data set where we do cross-company analysis that...

runs continuously in our backend and already sets up a lot of the answer that you will later use in a sales argument. The trick here, as with most AI, is: for AI to deliver you a proper answer, you actually need to pre-solve that answer pretty thoroughly if you want to do a good job at it. So a lot of things we have pre-solved in our infrastructure.

And using and recombining that is where some of the magic actually happens.
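
Here is a hedged sketch of the pre-solving idea Frank describes, with entirely hypothetical names: the expensive analysis runs in a batch job ahead of time, and answering only recombines cached pieces.

```python
# A sketch of pre-solving: a batch job continuously refreshes heavy
# cross-company analysis, so query time only recombines cached results.
# The store, job, and field names are all made up for illustration.

from datetime import date

PRE_SOLVED: dict[str, dict] = {}  # account -> analysis, refreshed in the backend

def batch_refresh(accounts: list[str]) -> None:
    # The expensive work (normalization, peer comparison, news-linked stock
    # moves) happens here, ahead of any user question.
    for account in accounts:
        PRE_SOLVED[account] = {
            "as_of": date.today().isoformat(),
            "peer_comparison": f"normalized financials: {account} vs peers",
            "stock_moves": f"{account} price drops with the news around them",
        }

def answer(account: str, question: str) -> str:
    analysis = PRE_SOLVED.get(account)
    if analysis is None:
        return "no pre-solved analysis yet; fall back to the slow path"
    # A real system would hand these pieces to an LLM; here we just show that
    # the answer is assembled from already-solved parts, not raw data.
    return f"{question} -> grounded in {sorted(analysis)} (as of {analysis['as_of']})"

batch_refresh(["Nike"])
print(answer("Nike", "What should I position?"))
```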

Sani Djaya (10:52)
Awesome. So then tell me a bit more about your day-to-day as head of Applied AI at Databook. What does that look like? What does a typical week look like?

Frank Wittkampf (11:03)
Sure, so I just came out of a discussion with...

of our company that's...

One of the main things we talk about there is: let's talk through an actual customer use case. Let's say, okay, we're trying to make sure that we up-level this salesperson that is working at one of our customers' companies. How do we actually up-level them? What coaching do we give to that person? What types of information do we need to surface to them? How would we, purely looking at what their job is and not just as technologists, actually improve that?

Out of that, we take a set of requirements, like, these are incremental things that we could put into our product. That's part of my day, which is more the translating of product management things into what this actually means for technology. Another part is, we have an existing infrastructure and, like everybody, we dove into the AI side when the AI explosion happened

and built a lot of infrastructure, and the world is changing so rapidly that a set of things that we've built, we're replacing with better ways of doing the same thing, right? So a lot of my work comes down to: okay, you've got this ever-growing body of business logic that you're trying to manage your AI flow, or your problem solving, through. What are the best ways to compartmentalize

all this business logic and information such that it stays usable and maintainable over time, and such that we can recombine it into new experiments and new parts of the product that we can work with. So there's a lot of: how do I make these parts reusable, how do I make it into subparts that we can recombine into new solutions, and then what does that mean for architecture? And that's a...

I would say at least 50% of my time goes to, okay, how do I translate this into something that we can build faster on in the future, so that we can accelerate our product delivery. That takes some real time.

Sani Djaya (13:01)
Yeah, we've definitely had lots of conversations like that continuously on our side as well. I'm not leading on the engineering side, but the engineering lead, he's a VP of software engineering, definitely talks about that a ton. Which I also really appreciate, because then he's always thinking about: we're using it for this use case right now, but how can I make it its own piece, reusable for other use cases as well? Which I

super appreciate as a product person, so that it's reusable for other teams as well, not just my team.

Frank Wittkampf (13:33)
Yeah, I think the R&D element is very important here. Anybody that works in this space knows it's moving so fast. If you can't experiment and try new things, then you also can't get better than other people at doing something. So allowing as many people in my company as possible to experiment with the pieces that we have requires you to build in a slightly different way than if you just have a product spec and you try to deliver against that.

So there are simple things like: how do I make prompt engineering faster? How do I make it easier to recombine agents into different configurations? How do I make the composing of flows and outcomes such that a lot of people can work on it at the same time, and they don't interfere with each other, but can still deliver proper outcomes?

That's a lot of the fun of it.
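
One way to read this, sketched under assumptions: reusable pieces live in a registry, and a flow is plain data, so recombining them is a configuration change rather than an engineering task. The registry pattern and piece names below are illustrative, not Databook's design.

```python
# Sketch of lowering the experimentation barrier: steps register under
# human-readable names, and a "flow" is just an ordered list of names that
# non-engineers can rearrange without touching the step code itself.

PIECES: dict = {}

def piece(name: str):
    # Decorator that registers a step in the shared registry.
    def register(fn):
        PIECES[name] = fn
        return fn
    return register

@piece("fetch_news")
def fetch_news(state: dict) -> dict:
    state["news"] = f"recent headlines for {state['account']}"
    return state

@piece("summarize")
def summarize(state: dict) -> dict:
    state["summary"] = f"summary of: {state.get('news', 'nothing fetched')}"
    return state

def run_flow(step_names: list[str], state: dict) -> dict:
    # Because the flow is plain data, recombining steps is a config change.
    for name in step_names:
        state = PIECES[name](state)
    return state

print(run_flow(["fetch_news", "summarize"], {"account": "Nike"}))
```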

Sani Djaya (14:21)
I'm curious, because it's been interesting as we're building AI products and agents as well, on top of LLMs. Part of it is a lot of people, product, engineers, and designers, don't even know what LLMs are capable of, or what is even possible. And so there isn't a strong sense of

what is possible and what is not possible. And then what is possible is constantly shifting, because there are new model updates, things get cheaper, OpenAI releases computer use. How do you think about and approach this balance between R&D and not spending so much time on engineering R&D that it doesn't end up driving customer results? How have you

figured out and found that balance of: let's explore something because we think maybe we could do it, versus, we don't even understand this technology yet, and maybe we should just spend time exploring the technology in general? How do you think about that?

Frank Wittkampf (15:28)
Yeah, I've got two pieces to that answer. One part is the thing you mentioned at the start: people not even imagining the possibilities that are there. I think the contrast to that is people completely over-imagining what it actually could do. And then the second part you asked about is,

how do you balance the experimentation versus the real? For us, the main thing that I do with the team is, when we do a specific experiment or when we have a new piece that we want to do, always couple it to a use case that allows me to deliver something that's incremental to what I currently have. So orchestration is incredibly key

in the AI world, obviously; I think a lot of your discussions will hit the orchestration point. Hooking ourselves to one way of orchestrating, I think, is wrong, because I know that I don't know the future regarding orchestration. So we have multiple ways of orchestrating within the company, and we're trying to, for each of them, say: okay, if we orchestrate in this way, what's the incremental business value that it would deliver us? And now let's take an actual sales use case, because we're in the sales space, obviously, that

I can now enable because of this. Okay, this experiment is now tied to that use case; show me that it can deliver that use case. And that point, tied specifically to orchestration, I think is very, very important, because it's such an ambiguous space, and it's unclear how it's going to resolve.

I have a few bets that are similar to each other in the orchestration space, and I'm basically playing them out, trying to make sure that I don't over-invest engineering time in all of them, but such that I know I can learn quickly. If I get a lot of extra value out of this new experiment that we're doing, maybe I should start shifting my balance a little bit more to orchestration method number two instead of

one and three that I was leaning on, because this starts to deliver more. But the main answer to what you just asked is: I tie it to a use case. Going back to point one that you raised: some people can't even imagine what comes out, and it's hard to keep track as well. And on the other side, there are, I think, people that just over-imagine what it can do. I actually run into the people over-imagining it

more than the other side, because I think there's a reason why my title is Applied AI. The applied part is: deliver value with this thing. There are tens of thousands of salespeople actually using my product, so when I change it, they will see it. That's the applied part. Computer use is a fun example here, right? Computer use is incredibly useful. So...

We're obviously experimenting also with, like, can we have something scour LinkedIn for you using all of your own context and stuff like that. The problem with it is it's just so non-practical at the moment. It is incredibly slow. It needs your login. It's slowly being shielded by people so that you can't get to places anymore. So practically, implementing that in an enterprise setting is highly complicated, especially if you have

to get through security clearance and all that type of stuff. So I think it's almost unhelpful sometimes that people have this idea of: it can do anything. Because if you apply it to an enterprise use case, practically delivering on that specific thing in the coming few months is going to be highly, highly complicated. That doesn't mean we're not experimenting with, okay, how would this work? And can we do this ourselves? And can we practically deliver it?

But there is a real reality part to my job, which is: if this thing doesn't actually deliver value to our customers very, very quickly, then let's focus on the things that do first.
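
A rough sketch of the "bets tied to use cases" framework Frank outlines earlier in this answer, with made-up methods and scores: every orchestration candidate runs the same business use cases, and a scoreboard suggests where to shift engineering time.

```python
# Hedged sketch: each orchestration bet is evaluated against one shared
# use-case suite. The methods and their scores are placeholders; in practice
# the scores would come from evals against known-good sales outputs.

USE_CASES = ["prioritize my accounts", "position product for Nike"]

def workflow_method(use_case: str) -> float:
    return 0.62  # placeholder quality score for a hard-coded workflow

def guided_method(use_case: str) -> float:
    return 0.71  # placeholder score for guided reasoning

def free_agents_method(use_case: str) -> float:
    return 0.55  # placeholder score for a free agent network

METHODS = {
    "workflow": workflow_method,
    "guided": guided_method,
    "free_agents": free_agents_method,
}

def scoreboard() -> dict[str, float]:
    # Average each method across the shared suite of use cases.
    return {
        name: sum(run(uc) for uc in USE_CASES) / len(USE_CASES)
        for name, run in METHODS.items()
    }

best = max(scoreboard().items(), key=lambda kv: kv[1])
print(f"shift investment toward: {best[0]} (avg score {best[1]:.2f})")
```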

Sani Djaya (18:58)
I'm curious how you test and experiment quickly.

Frank Wittkampf (19:04)
Okay, so we have a bunch of customers and...

Sani Djaya (19:08)
Mm-hmm.

Frank Wittkampf (19:09)
they're working with us a lot. So we have a set of forward-deployed engineers that are helping our customers quickly go through sets of use cases. That is, our platform comes with pretty much all functionality baked in, but our customers come with, couldn't you also do XYZ? And from that we basically draw in a lot of extra work, because we're trying to

custom expand and then build that into our platform. So a lot of it comes from there. It's like, okay, there are real pain points and user needs; let's experiment along those. And then we try to couple that with, okay, how would you use AI to be able to do that? And I think it's helpful to take kind of a customer focus on what we do, because otherwise I could just be experimenting in any direction. So that's, I think, the key answer to what you're asking:

what do we do? Coupling it to something that the customer wants is the biggest thing. And then secondly, it's just lowering the barrier to experimentation as much as you can. Everybody loves to play with AI agents and loves to play with flows, so we lower the barrier for less technical people in my company to be able to play with that and come up with fun things for themselves. So we have our salespeople themselves actually, like...

What's fun is we have a sales product, so we can dogfood it to our own salespeople quite a bit. Allowing that experimentation so that people can just go ahead, and we don't direct it, is the other part.

Sani Djaya (20:30)
That's awesome. So you touched on it a little bit: you're selling to salespeople, but you have salespeople internally, so there's a little bit of dogfooding. And you also talked about how you're lowering the barrier to entry, allowing your team to experiment and play around and see what they can build, is what it sounds like. Is there

anything in particular you want to share there of what you've seen as super impactful and powerful, of how your teams or your salespeople have put agents together, or used any other LLM tools to deliver new value for themselves, and then you realized you could resell it to your users?

Frank Wittkampf (21:14)
Yeah, so our product has an interface. There's a set of ways you can get to it, but one of the interfaces is an

AI type of interface, like you would expect. Our salespeople are on that continuously. And what we have internally in our company is, of all of our prompt sets and flows, et cetera, we have a specific space, let's call it a store, for every person, in which they're

able to completely customize their own, basically, Databook AI. So that means that even my CEO and others in the company have complete ability to change things. Even if that would break it for them, they're able to play with it, because it only affects their own instance. So in that sense, we give people pretty deep reach into the platform as to what they can change.

But the fun thing is, when they touch it, it only affects theirs; it doesn't affect anybody else. The other thing is, anything that we roll out, we roll out internally first. So there's heavy dogfooding and heavy experimentation there. Especially around, one of the things we care about a lot is having a full closed loop, which is: you don't just ask our product things and then it comes with answers. No, we

actively want our product to engage with and coach you over time, even if you don't engage and follow up on things that it has recommended and done for you. That is heavily used internally; that is going to our customers, and there's some experimentation there. But that's, for us, really the...

The big change that's coming in AI is not just you engaging with it, but it engaging with you and helping you. And that's really our big next leap forward.
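
The per-person "store" Frank describes could look something like the following sketch, where reads fall back from user-scoped overrides to a shared default, so one person's changes can only break their own instance. All names here are illustrative assumptions.

```python
# Sketch of per-user configuration: overrides are scoped to a user, and reads
# fall back to the global default, so the CEO can safely break their own copy.

GLOBAL_DEFAULTS = {"opener_prompt": "Summarize the account in three bullets."}
USER_OVERRIDES: dict[str, dict[str, str]] = {}

def set_override(user: str, key: str, value: str) -> None:
    # Writes only ever touch the user's own space.
    USER_OVERRIDES.setdefault(user, {})[key] = value

def get_config(user: str, key: str) -> str:
    # User-scoped value first, then the shared default.
    return USER_OVERRIDES.get(user, {}).get(key, GLOBAL_DEFAULTS[key])

set_override("ceo", "opener_prompt", "Open with the stock-price story.")
print(get_config("ceo", "opener_prompt"))      # the CEO's customized copy
print(get_config("seller_1", "opener_prompt")) # everyone else: untouched
```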

Sani Djaya (23:02)
Yeah, being proactive, that's a trend I've been hearing as well, for sure. 100%. Awesome. So what's next for Databook? You know, Databook has been around since 2017 and has some pretty nice clients as well. What's next for the next six to 12 months for Databook, beyond what you just shared in terms of proactivity?

Frank Wittkampf (23:24)
Huh.

Yeah, I mean, some of my answer there is going to be relatively generic. It's like, look, we have some amazing, incredibly large customers, and going deeper with them is always a large part of our goals. We are enabling things, but the amount of things that we can still build out for them is just endless. So we can get so much deeper.

Everything for us gets really set against what our customers' results are. For example, right now we can see deal sizes increase by 1.9 to 2x of what they are when people are actively using this. We help people generate extra pipeline.

Our biggest goals are, like, how do we double those, and how do we double those again? So we're very outcome-based, and that's about getting deeper with our current customers.

Then we have a very heavy, big-enterprise focus, and we're basically expanding toward: how many more customers can we get, growing from the really big ones

to the slightly less big ones, et cetera. So we have kind of a top-down market approach, a little bit Apple-like: come in with a very high-value product and grow down. And that's really what we're pushing on. A lot of that is in growing our product deeper. So that's the main focus.

Sani Djaya (24:46)
Yeah.

Frank Wittkampf (24:49)
What that means, because obviously you want to know more AI things, what it means for us from an AI perspective is: free reasoning is all fun and great, but in enterprise, fully free reasoning is just not that helpful. For research and exploration, free reasoning is great. Guided reasoning is, like, there's a difference between

Sani Djaya (24:49)
Yeah.

Frank Wittkampf (25:12)
completely declared reasoning, completely free reasoning, and guided reasoning. And the guided reasoning is, like, we get you to this very specific outcome that we know we need to get you to, but the steps in between are not necessarily fully known. That's the part where we're pushing pretty hard, because networks of agents are just not gonna deliver it, and a fully...

hard-coded set of flows is absolutely not the modern way of solving things. The way somewhere in between, that gets me a well-defined, repeatable enterprise output, that's the direction that we are pushing our agentic reasoning in. It's like, okay, I can get you to a very reliable output, but I still get there through a set of free reasoning. And how do you balance that?
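
A minimal sketch of what "a reliable output through free reasoning" can mean in practice, under the assumption that the final artifact must validate against a fixed shape while intermediate reasoning stays unconstrained; the schema and validator are our own illustration:

```python
# Sketch: the model reasons however it likes, but only an answer matching the
# required shape is accepted, retried up to a limit. Names are hypothetical.

import json

REQUIRED_KEYS = {"product", "rationale", "timing"}

def validate(raw: str) -> dict | None:
    try:
        parsed = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if isinstance(parsed, dict) and REQUIRED_KEYS <= parsed.keys():
        return parsed
    return None

def reliable_answer(ask_model, max_tries: int = 3) -> dict:
    for _ in range(max_tries):
        # Free reasoning happens inside ask_model; only the end result is checked.
        result = validate(ask_model())
        if result is not None:
            return result
    raise RuntimeError("model never produced the required shape")

# Demo with a stub model that happens to return valid JSON:
stub = lambda: '{"product": "Analytics Suite", "rationale": "margin pressure", "timing": "Q3"}'
print(reliable_answer(stub))
```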

Sani Djaya (25:39)
Yeah.

Gotcha, gotcha. And I didn't even think about it, but you're right, I definitely agree: it's somewhere in between. It's not either end of the spectrum, either workflows or completely free. What do you think is your biggest roadblock as a company in terms of reaching that next six-to-12-month vision?

Frank Wittkampf (26:19)
I mean, I think when you play in the enterprise space, working with data, and data that is deeply proprietary data of large companies, we do a lot of that, but the length of time it takes to get through all of the approvals, et cetera, is definitely

Sani Djaya (26:47)
Hmm.

Frank Wittkampf (26:48)
one of those. If you're in enterprise, then you also deal with enterprise speed on decision making, and that is slower than if you go to startups. Those things are always interesting. What we see there is some really cool developments; for example, what Databricks does, like,

together with Databricks, being able to set up structures such that you're not exactly sharing data, but still being able to access joint datasets, is incredibly useful for us. There's real fun work there, but those are also big hurdles. So we're deep in people's CRMs, but there's so much more that we can do. So that's kind of...

Sani Djaya (27:28)
Yeah, cool.

That's awesome. I am curious, can you share a little bit more of what Databricks does? I'm not familiar with what Databricks does in terms of, it sounds like there's a little bit of, actually, I'm not even sure. You tell me: what is Databricks doing that's interesting in being able to set this up for their customers?

Frank Wittkampf (27:49)
Well, so, I think it's called Delta Lake. If you basically set up a data structure that you have joint access to, but you don't have the ability to actually replicate the information that's in it, you can do fun, agentic work without data ever leaving your customer's premises, while you can still operate on top of it.
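
Frank hedges on the name; the feature in the Databricks ecosystem that matches this description is Delta Sharing, where the provider grants access to a table and can revoke it, with no replicated copy left behind. A minimal sketch with the official delta-sharing Python client, using placeholder share coordinates:

```python
# Read a shared Delta table in place via Delta Sharing. The profile file and
# share/schema/table names below are placeholders, not a real deployment.

import delta_sharing  # pip install delta-sharing

# The provider hands the recipient a small credentials file (a "profile").
profile = "customer_share.share"

# Shared tables are addressed as <profile>#<share>.<schema>.<table>.
table_url = f"{profile}#sales_share.crm.opportunities"

# Query the shared table; the data stays on the provider's side, and lifting
# the grant cuts access immediately, with no remnant copy to claw back.
df = delta_sharing.load_as_pandas(table_url)
print(df.head())
```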

There's different ways you can approach that, the premise of... Another example is, if you work with a lot of customer data, there are lots of tools that allow you to ingest or connect to almost anything. The proper way is to make API use of all these things, so that people can just, like,

clay and say, like, hey, this is the access you have right now. When I lift it, you're out, and there's no remnant of my data that you have. So whether you ingest, or have API access, or do a Delta Lake setup, or use other structures to solve this problem, I think

What matters there is in any of these setups, there's a lot of trust that you need to have with your customer.

For us, it comes down to how long of a relationship you have with this customer and how much value they have seen you deliver. At some point, some executive says, hey, this is enough value that I'm willing to share XYZ data with you. And the dynamic that we're seeing is, with these very large customers, we're quite deeply integrated with them. As you deliver, people are willing to take on a bit more...

Risk is the wrong word, but they are willing to more deeply integrate and work together with you, because it's just so valuable.

Sani Djaya (29:38)
Yeah.

Yeah, yeah. I mean, Snowflake is a company where you literally store all of your data, right? And so there's a point where a company says, no, I'm not going to run my own data centers; I'm actually just going to pay this company to store all of the data of our users, for example. Cool. All right, I have some last few questions to close it out. This is my favorite question to ask folks, which is:

What are you most proud of and why? And it can be personal or professional.

Frank Wittkampf (30:07)
I get a lot of my satisfaction from how my team works and how the interaction goes. Everybody has different motivations for their work. When I worked at McKinsey a long time ago, we had a framework for, okay, within a workforce there's actually

a set of different motivations that people have. Some people are financially motivated, other people are really motivated by, like, I help the world, others by pleasing others, et cetera. My motivation is very much: if my team is highly performant, that makes me incredibly happy. So the thing that I'm most proud of is,

I came into this company as kind of an outsider, where there was a set of AI initiatives going on. And now, all of a sudden, the team has a person that says, okay, this is the direction we're going in with AI. The thing I'm very proud of is that I,

with the team, basically came to the conclusion: hey, we need to start building alternate infrastructure, but how do we actually build it into our current one? And we found a way to mix this agentic and workflow work together in a way that delivers higher quality outcomes than we had before. And the fact that we, like...

What often happens is someone comes in and is like, okay, let's just start building everything new, or, okay, we're just going to keep extending the thing we have. The fact that we figured out a way to basically take our AI approach and blend this new approach in, of how do you work with complicated agent networks in a workflow setup that gets you to more deterministic outcomes while still having

free steps in between, that took a real amount of problem solving and a lot of back and forth, and emotions were going through highs and lows, and, no, we can't do this; yes, we can. For me, the biggest satisfaction of this period at Databook has been being able to resolve that in a way that builds on and with our system,

but still takes this new view and new vision of the future, which I think is highly satisfying. And it is fun, because I read a lot online, obviously, about what everybody's doing, and I feel like we're on the edge of where people are. We're doing things internally, and then at some point later we'll see things described online. So yeah, I think we're headed in the direction that, in general, things are going. That's super fun. It's really fun,

Sani Djaya (32:40)
Yeah.

Frank Wittkampf (32:44)
like you're on the edge of something where things are being figured out.

Sani Djaya (32:45)
Yeah. Absolutely.

Yeah, yeah, that kind of push-pull struggle of trying to figure out something that's kind of vague, and then figuring it out, and then realizing other people are figuring it out around the same time or even ahead of you as well. And it looks like you guys have built an incredible team to be able to do that, and you've been very principled in how you thought about it as well. All right, last question: where can people find you online

if they want to learn more about you or Databook, and how can listeners be helpful to you?

Frank Wittkampf (33:19)
Yeah, so my name, Frank Wittkampf, my last name is ultra complicated. There are two people in the world with my name, so when you Google me, it's pretty easy to find me. And the other person is actually a far-away related family member too. So it's pretty easy to find me on LinkedIn at Frank Wittkampf. It's pretty easy to find me on

Frank Wittkampf (33:42)
Medium; I write some AI pieces on there, and for Towards Data Science, I like them a lot. And then, yeah, I'm also to be found at a lot of AI events in the Bay Area, because I like to hop around and see what people are working on. But was there any other thing you were aiming at?

Sani Djaya (33:57)
Nope, nope. But I didn't know you wrote for Towards Data Science. That is a very highly respected Medium publication, especially in data science, for sure. All right.

Frank Wittkampf (34:07)
I love

them. They make fantastic stuff.

Sani Djaya (34:10)
Alright, how many articles have you written for them?

Frank Wittkampf (34:13)
I think I published three with them, something like that. I don't want to overemphasize the importance, but it's super fun, I really enjoy it, and it's a good way for me to reflect on what's going on, take that back, and figure it out.

Sani Djaya (34:17)
That's a good bit. That's a good bit.

100%, 100%. All right, Frank, thank you so much for your time.

Frank Wittkampf (34:35)
You too, Sani, really nice to talk to you.

Sani Djaya (34:36)
Thank you for tuning into this episode of Beyond the Prompt.

If you enjoyed this discussion, please subscribe to the podcast so you don't miss future episodes with other leading experts in the AI space.

Also, if you could take a moment to rate and review the podcast, it would help me tremendously in reaching more listeners and bringing you more great content. Until next time, keep going beyond the prompt.