Make it easy to get started. Make the easy things easy. Make the hard things possible. Don't let the developer get blocked.
Jack:I'm joined today by Tuhin, the CEO and cofounder of Baseten, an inference platform that just announced $150,000,000 of Series D funding. Tuhin shares a really interesting origin story, how they've been able to capitalize on the huge wave of Gen AI, how to actually move fast and what that actually means. And he shares something quite interesting, which is that they have this internal motto to do anything that will increase inference. I would love to know the Baseten story, because I know that you guys have been building stuff for a while and it's just been exploding lately, and I'd just love to hear the story.
Tuhin:Yeah. We started the company in 2019, and we kinda started it against this backdrop that machine learning was gonna be a big deal. We didn't really know how, but we knew that we wanted to be around it. So we were like, oh, let's build a picks and shovels business next to it. I'd say from 2019 to 2022, 2023, we were in our build phase. We were building along the right primitives that we need now, but we weren't seeing the demand from the market as much.
Tuhin:And I think that period was challenging in a lot of ways. It kinda wasn't in other ways. You know? We were just having a good time building stuff. It wasn't all bad.
Tuhin:And then in 2023, the market kinda showed up, and I'd say we had to reposition slightly. People sometimes call it a pivot. It wasn't a pivot. It was more just like a, hey,
Tuhin:we were going here, and the market said, start going there. And once we did that, the last couple of years have just kinda been insane in a good way. So, yeah, the short of the story is, I think maybe a good way to think about the company's background is that we were around 10 to 20 people for three years. And now we're about 150. So all of that growth has come in the last eighteen months, I'd say, from a headcount perspective.
Jack:Yeah. Was it immediately obvious? Was it just, ChatGPT launched and, like or was it kind of a
Tuhin:slow burn? I don't think so. It was obvious that something was different, but it wasn't immediately obvious that we were onto something, if that makes sense. Even into 2023, there was a lot of demand from the market for sure, but things were so fickle and the market was changing so fast, and it was still very stressful in other ways. So no, it wasn't like, oh, this is it.
Tuhin:I think now we look at the market and go, oh, wow, we got pretty lucky. We found a great market and we're building the right things and we're focused on the right things. And now it feels a bit more like, oh, wow, there is something to lose.
Tuhin:But in the moment, it wasn't like, my god, yesterday it wasn't there and today it is. It was an incremental thing, I'd say.
Jack:Okay. Yeah. That makes sense.
Jack:And then, what was the biggest difference? You said it was a build phase, but was there anything that became more important afterwards?
Tuhin:Yeah, sure. Look, I think we were building something for data scientists running models for mostly internal workflows, which is what the world was back then. And so we had this application builder where they could build front-end applications, we had a model deployment piece where you could deploy a model from your laptop.
Tuhin:And then we had this serverless code piece where you could kind of stitch together your model with code. And the beauty of it was when you tied it all together. I think at the end of the day, we killed number one and number three, and we just started focusing. We went all in on number two. It is funny, though, because that third piece, the serverless code piece, we're kind of doing another version of that now, which is code execution sandboxes for LLMs.
Tuhin:It's kinda the same problem, just a different user. So I'd say the soul wasn't that different, just the surface area was bigger. And now, in turn, the surface area is narrower, but the problem space is deeper with what we're working on. And that makes sense for us.
Jack:Yeah. And then, what is the biggest challenge when there's such huge demand like that? Because I think not many people have been in that position.
Tuhin:I think it's just to keep up, right, and keep the quality up. We want to satisfy all demand to some extent, and I think the biggest challenges are usually just around trade-offs. If you do this, you're not gonna do that. If you serve this customer, you're probably not gonna serve that other customer well. If you spend all your time recruiting, you're not building product, and if you spend all your time on product, you're not recruiting. In a lot of ways, those challenges aren't that different from when you're five people figuring out what to do.
Tuhin:It just you know, it's the what changes for what has changed for us at least is that the speed has just gone up a lot. Our customers are moving very, very fast. Like, you know, we work with some of the fastest growing air copies in the world, and, you know, they are they move very fast. We just don't wanna be the bottleneck to them, and we want to be, like, an accelerant for them. And so keeping up with that is just you know, it it it's hard.
Tuhin:Especially when you have fast teams at fast-growing companies against a dynamic landscape, which is AI right now, and we're trying to serve as the infrastructure layer for that. It can be chaos at times, if that makes sense.
Jack:Scaling DevTools is sponsored by WorkOS. At some point, you'll land a customer who needs enterprise features like audit trails, SSO, and role-based access control. You could spend ages tearing your hair out, building these things yourself, or you could use WorkOS. Let's hear from Utpal from digger.dev.
Utpal:I can speak for open source companies because I think that's where I have the most experience personally. If you're open source and you're doing enterprise first, the minute you think about monetization is when you should think about WorkOS. How it's designed is that you can start as early as day zero. But for us, it wasn't day zero. It was closer to when we first started monetizing, because we didn't have a sign-up at all.
Utpal:People could just anonymously use our tool. To be honest, if we do that again, I think we'd think about that on day zero, to be honest, because, like, should have done it on day zero ideally. Anonymous usage should be permitted, but you should know who's using your tool. It should be opt in 100%. But it'd be great to have auth from day zero.
Utpal:So, yeah, that's what I think.
Jack:Thanks, WorkOS. Back to the episode. So, how do you move fast? That's kind of hard.
Tuhin:Good question. Maybe actually, you know, you just joined a small company. What would you say is the number one thing that lets you move fast? I'll tell you mine after that. I bet you'll say the same thing.
Jack:I think ours is focusing on the right things. Yeah. That's it.
Tuhin:I don't know if it's exactly the same. But I think the biggest thing, which you need to do in a really small company, is that you just need to shrink the feedback loops between you and your customer.
Jack:Mhmm.
Tuhin:And make them really, really, really fast. I think that is just the biggest driver. And that is everything from how long it takes you to ship something, how long it takes you to hear the feedback about something, how long it takes you to improve it, communicate it, and then test it. So we rely very heavily on being deeply embedded with our customers, and I think there's a real upside on both sides of that.
Jack:Yeah. How do how do you get deeply embedded?
Tuhin:We do some forward-deployed engineering, but I think it's more a culture thing beyond that, which is, we start everything not with technology, but with: what is the user problem we're trying to solve, and what are we trying to unlock for the customer? And then we work backwards.
Jack:Like, you've got the same optimization target as them, rather than optimizing, like
Tuhin:Exactly.
Jack:Otherwise, when you hear, oh, this needs to be faster, you don't understand why. Yeah.
Tuhin:Yeah. Exactly. It's like, how pure can that feedback be? Because a lot of it gets diluted when you start to rely on, you know, user surveys, or the product manager who goes and does a bunch of user studies. We want our engineers to be as close as possible to the engineers on the customer side. And if they're communicating directly, I feel like we will get to the right long-term answer.
Jack:Do you do a lot of in-person stuff? Because I know with us, sometimes you just get Slack messages and, yeah.
Tuhin:Yeah, we try to. With a lot of our customers, you know, we're all engineers. We would rather just get on the Zoom call, turn the video off. Yeah.
Tuhin:And eight minutes in, be like, are we done? It's a lot less coordinated than that. It's a lot more impromptu. Imagine pair programming with your customers. That's the way to put it, I'd say.
Jack:This is kinda a side point, but how do you manage your time? Because it seems like you're quite good at that.
Tuhin:Oh, no, I'm not. I still do a lot of customer and sales stuff. I'm usually pretty close during the rollout for a lot of customers. I do a lot of recruiting, and then the kind of more boring investor stuff as well that I end up spending a lot of time on.
Tuhin:So I don't think I manage my time that well, to be honest. I think I just end up working more. But, you know, it's a bit like what you said about yourself earlier, which is, it's a jack-of-all-trades vibe more than anything else. You kinda do everything.
Tuhin:You kinda do every you do everything kinda bad. I'm glad you put it. Yeah. Yeah. But but but but but it'll work out.
Jack:Yeah. I mean, I knew that was the case for me. I thought that you would be different, you know? Okay, that's super good to know.
Jack:One thing I wanted to ask you about is, I've seen you talk about how you wanna work on, or push, anything that will lead to more inference. I thought that was a really interesting framing, because maybe other DevTools people could think about what their version of inference is, and I'd just love to hear about that.
Tuhin:What we believe is that every company, ten years from now, either needs to be AI first or AI enabled. And I think the second piece is that, over time, we believe a lot of model capability gets commoditized. And I don't mean fully commoditized, just like, hey,
Tuhin:models start to converge at some point, barring big step-function architectural changes in what models are capable of. If that is the case, open source will have a big part to play, and custom models will have a big part to play. And then there are gonna be a lot of models and a lot of applications that need models, and they're gonna need an underlying infrastructure layer. And that's why we work on inference to start with.
Tuhin:And so, like, you know, we we we we think this is the biggest the biggest market, the biggest problem, the the thing that is blocking end customer experiences of AI is how well you do inference. Focusing exclusively on inference has, like, given us a lot of clarity and, you know, it makes it easy for us to understand what to what to build. But, naturally, you start to think about it. It's like, hey. Where are the other places where this can where we can plug in from a product perspective?
Tuhin:And, you know, one way we think about it is that there's a bit of a flywheel that happens with inference is that with you know, inference creates data that you can then use to fine tune other models, which then create more inference. And along that whole journey, you need a lot of things. Right? You need data collection pieces. You need evals.
Tuhin:You need training tools. Then you need serving tools. And we're like, that whole inference. So this is where we think, like, you know, we set as a company and, you know, we think that, look, if we're buying if we're building our core thing will also be inference, but if we're building a treatment product, which we've built recently is in service of, hey. We want the models to run-in days 10.
Tuhin:If we end up building, like, a evals framework or integrate with these great evals products like this today, that is because we we think that's gonna lead to people using more models. To the extent we've built a post training product at some point, it's good. Like, our approach training product, it's because we think that those products even look like it for themselves. So we really, like, our our North Star is very much is what we are building directly or indirectly affecting the ability for these companies to run more models and do more inference? And if so, that's how we think about what whether Bayesian will build it at some point.
Jack:Yeah. That makes sense, because I guess inference is when someone's actually using it, and all the other sides are making that more useful, making people use it more, like training. Yeah.
Tuhin:No, that's right. It's like, inference just begets more inference. And we just wanna make sure all the scaffolding and tooling exists for that.
Tuhin:And, like, whether it's through us or whether the ecosystem that we plug into, like, that is very important for us and stuff. Like, even the partnerships we do, like, they're like, will this lead to more inference? Like, when we when we go and work with a new cloud for compute, it's you know, that is so we can provide more capacity to do more inference. And, like, you know, and the the tagline of inference is everything and, you know, inference is based on, like, that that's how we want to think about the company and the product and the types of things we build.
Jack:What do you think are the key developer experience factors in inference for customers?
Tuhin:Yeah. I mean, look, I think the biggest thing in general for us is speed. I think developers get frustrated when things are slow. The second one is debuggability. Getting a model ready to run is pretty hairy at times, and the more transparent you can be about what's going on under the hood, the more developers trust you, and the less likely they are to say, I'm frustrated,
Tuhin:I just wanna go and do this myself. Mhmm. And with the the third one probably is, like, the bug berdy, and, like, that's the same thing. It's along the same line. We just hey.
Tuhin:Like, there's all the developed pretty weird stuff that happened while you're trying to package up a model and get it ready for inference. And then stuff that happened after the fact, you know, around, I have the model running in production. What what what tools and data do I have access to to understand how it's all going?
Jack:It does sound very analogous to running classic compute in many ways. Like
Tuhin:I mean, it's not so much different from the value that Vercel provides or a PlanetScale provides. You know? It's like: make it easy to get started. Make the easy things easy. Make the hard things possible.
Tuhin:So that's like, don't let the don't let the developer get blocked when they want when when they want when they want. Yeah. And then, you know, at the end, like, when, you know, I need to observe the yeah. I need to understand what's going on in my systems. Because, yeah, so easy to work with, layers of depth, and to on observability are, like, kind of a key terms that I think we could think about.
Jack:Yeah. That makes sense. I think that's something that we're struggling with at the moment. Sometimes, when it doesn't work, making sure people understand why it didn't work.
Jack:And, yeah, it's very frustrating for our users, I think, sometimes, if we haven't exposed that. Yeah.
Tuhin:And it's really hard, I think, because you just need a lot of depth and a lot of time. Because at some point, you know, we have developers on our team who are using other software, and when they run into a hurdle, they get upset: I'm just gonna go build this myself. And it's a terrifying thing, but that's exactly the type of thing you don't wanna elicit in your customers.
Jack:Yeah. It's tough to make sure you expose the right level of abstraction without hiding things.
Tuhin:Yeah. Totally. The balance is hard. The principle we use there is: easy things are easy, hard things are possible.
Jack:Yeah. That makes sense. So if it's something very basic, they can just do it, probably don't even have to check the docs, it's obvious. But if they want to do something niche or hard, they can, because you've exposed all the knobs.
Tuhin:Exactly.
Jack:Yeah. That's actually a really good way to put it. Everyone should note that down. Okay. I wanted to ask you about the training product that you're launching, or have launched.
Tuhin:Yeah, of course. We're in beta right now. I think we'll come out of beta pretty soon. We'll probably go GA by the time this episode's out.
Tuhin:We we just keep getting asked. It's okay. Can you bring this the base 10 inference experience for training? The way we think about training is not, you know, you bring your data and outcomes and model. It's very much like how do we get developed with the ability to train models using base tenants or what that means is an obstruction above compute with all the developer experience things that we talked about.
Tuhin:And so we just kept you know, once you hear enough times, you just end up building. Like, know, one one other way to think about it is from, a business perspective is that oftentimes when people, you know, would be running a model, they're like, oh, can I just train here as well? And we'd be like, oh, no. Go use this other product and then come back to us. And that's just, you know, it it just becomes leaky at some point where it's like you're giving someone else a lead for the Inference Zone because at the end of the day, most of the spend will be on Inference Zone Diamond.
Tuhin:So it's a it's a lot of, like, keeping cost like, there's a platform play to it. There's, like, you're keeping customers close to in one place part of it. There's also something I said to you earlier, which is, you know, we want them to train on base 10 so they can run-in front of base 10. So, like, they'll have bases run with the the more in front of beta. And then this is the business reading part, which is, hey.
Tuhin:You know, they're coming to us. They want to spend more money with us. Like, why are we sending them elsewhere?
Jack:Yeah. That makes a lot of sense. I just wanted to ask you two more questions.
Jack:One is, and this is coming from someone who doesn't know a lot about the space: does access to GPUs kind of become part of your specialty, almost?
Tuhin:Yeah, and I think it's more than that. At the end of the day, yes. Capacity is hard, and flexible capacity is very challenging.
Tuhin:And, you know, one one great thing that we have is two things. One that we have 10 different clouds that we work with in 40 regions, and so we have a lot of flexibility. We also have, like, you know, collective bargaining ability to some extent where we can aggregate demand and get better pricing and pass that on to our customers. So I think it is a big feature. We have a big team on no.
Tuhin:We have a small team on it doing big things. We have a small team on it doing big things, but it's a and at some point, it becomes somewhat of, like, a competitive advantage. For us, our ability to just kinda, like, pluck compute from somewhere and and add it to the cluster and expose that to customers. But I you know, we're not the cheapest place to get compute by any means. You know, like, if you're going to use base 10 as a source of compute, one, you'll probably be pretty frustrated because you can, ostensibly, only do inference on that compute.
Tuhin:Mhmm. And the second thing is that, you know, if you just need compute, you can just go to other places and get it cheaper, way cheaper. So, I mean, you should do that. So but, it is part of the offering. I recall, like, batteries included to some extent, but it is not the offering.
Tuhin:You know, we're yeah.
Jack:Yeah. So it'd be kind of like Vercel, in that if you just cared about the price of compute, you'd go to Hetzner or something, but you want the developer experience and stuff. Yeah.
Tuhin:Yeah. Yeah. You're buying the attraction to some extent.
Jack:Okay. Final question. I just wanted to ask if you have any advice, anything that you're always saying to other DevTools founders at an earlier stage than you guys.
Tuhin:Yeah. Look, I'm in no place to give people advice. The only thing I'll say is, just stay close to customers. And I think some of those early abstractions and technology decisions you make, a lot of them you have to stick with for a long time, and they're a lot harder to rip out.
Tuhin:So, like, you know, how how always I don't rush to market and just, like, hone the product from developer experience before I don't scale too much. Because the minute it's out then and there's, like, you know, and people rely on it, changing thing becomes really hard and you're kind of stuck with what you had.
Jack:Yeah. How do you balance that with putting it in people's hands, though? Would you just do a beta program or something?
Tuhin:Yeah, use betas. You just keep it small. Also, most developer tools are generally built by developers, and I think you can also just trust your intuition. I think most people know what is good and what is bad, and, you know, until you are truly delighted, just wait.
Tuhin:Okay.
Jack:Yeah. That's great advice. And I feel like the thing you said earlier as well, about shipping fast being about reducing the distance between you and customers, the feedback loop.
Tuhin:Yeah, for sure.
Jack:Yeah. That's huge. Thank you so much, Tuhin. Where can people learn more about Baseten and about you?
Tuhin:Baseten.co is our website. We're on LinkedIn, Twitter, all that good stuff. But you can also just email me if you want to chat, I'm always happy to chat: tuhin at baseten dot co.
Jack:Okay. Amazing. Thanks so much for joining, and thanks everyone for listening.