From frontier labs and enterprise platforms to emerging startups reshaping entire industries, The Deep View: Conversations podcast interviews the brightest minds and the most influential leaders in AI.
Jason Hiner (00:01.848)
So Rob May, you are a startup founder who's had multiple exits and you're now working on AI with Neurometric AI. One of the things I want to talk about, there's a lot of things that I want to talk about with you, but one of the things I want to talk about is how you choose what you work on because you certainly could work on a lot of things. But before we get into it, just thanks for being on the show.
Rob (00:27.081)
Yeah, thanks for having me, Jason.
Jason Hiner (00:28.844)
Yeah, very good. But before we get into exits and AI and all these things, I thought maybe we could talk a little bit about how you and I met, because we go back a long time. I want to say your first startup, or one of your early ones, I wrote about a long time ago, over a decade ago. Yeah.
Rob (00:46.805)
Yeah, first venture backed is how I describe it. I had some businesses before that, but that was my first venture capital backed business. Yeah.
Jason Hiner (00:52.096)
Okay, so your first venture-capital-backed startup, that was how you and I met. So interestingly enough, we have a mutual friend, also a startup founder, Todd Earwood. And I remember Todd saying to me, like, hey, there's, you know, my buddy Rob, and he had mentioned you before, he had said to me multiple times, you really need to meet Rob. And I was like, okay, yeah. And so he says to me, you know, the end of the year is coming. And he's like,
you really need to talk to Rob. He's got this new thing that he's doing and I really think it's smart. Yeah, I think you could write about it and really enjoy it, but if not, you just would enjoy meeting him. And so then one way or another, he connected us and we ended up grabbing lunch maybe, yeah?
Rob (01:38.089)
Yeah, yeah, well, and we knew some other people in common, I think, right? Like Jay Garman maybe and a few folks like that, because we were both in Louisville, Kentucky. You still are, and I'm not anymore. But the tech community there was relatively small. That was, you know, how we knew some people, so.
Jason Hiner (01:42.962)
yes.
Jason Hiner (01:49.101)
Yep.
Jason Hiner (01:55.022)
True, true.
Jason Hiner (01:59.65)
That's right, we had some people in common, but that was the first time we met, right? When you and I had lunch, whenever we texted and said, hey, let's get together, Todd says you have something interesting I should see. And we just had lunch, and one of the things I love about you that I realized from that first time we met is, one, you're just really smart, but you also give it to me straight, you know? From that very first time, and so.
Rob (02:03.614)
Yeah.
Jason Hiner (02:27.956)
I really liked that, and you presented me this idea you were working on, which was Backupify. One of my favorite products, or maybe my favorite startups, are the things where somebody tells you something and you're like, holy crap, why has nobody done that before? Those are the things that I almost always have this gut reaction to, where I'm like, that's something I will cover or write about, because if I had that reaction, probably others are gonna think the same thing, like, my.
Rob (02:45.235)
Yeah.
Jason Hiner (02:56.622)
Why did nobody do this before? And Backupify was this idea, at the time, that's gonna seem, you know, maybe outdated now, but it was really interesting. At the time we were spreading stuff through all these, like, social media accounts and other places online, and we were having instances of people having their stuff erased by accident, like the company just rebooted a server or whatever and they lost all their stuff. So Backupify was taking all of your cloud accounts and essentially,
Rob (03:10.547)
Yeah.
Rob (03:19.248)
Ahem.
Jason Hiner (03:24.802)
backing them up in one place, right? Am I characterizing it right?
Rob (03:27.366)
Yeah, yeah. When we started, when you wrote about it, we were still pretty consumer focused. We had some mutual friends who were CNET people who had started a company, I think it was called, was it FriendFeed or was it something? No, it was Profilactic. Yeah, Profilactic or whatever. And it was doing a bunch of stuff like that. So that was a big thing. You had all these online accounts. So this was December of 2009.
Jason Hiner (03:38.254)
yes.
Sean Morton. Yeah, yeah. That's right.
Jason Hiner (03:52.834)
Okay.
Rob (03:55.059)
I don't know if you know this, I don't know if I ever told you this, Jason, but we almost shut down the company before you wrote about it. So Backupify was a year old.
Jason Hiner (04:01.836)
Okay?
Rob (04:03.628)
And we were backing up like Gmail accounts, Flickr accounts, all this kind of stuff. And it was hard to get in front of people. And we couldn't raise money from VCs, because VCs would say at the time, this is crazy, people aren't going to move their data to the cloud, it's too insecure. Which makes no sense, it's crazy to think that they thought that. But after a year of being up and running, we had 2,000 users. Most of them were
Jason Hiner (04:21.325)
Yeah, right. That was the vibe.
Rob (04:32.34)
free, a handful were paying us like 30 bucks a month. And so I talked to the handful of investors we had and I was like, look, I think we're gonna shut this down. But I was begging, like, I spent, and I think startup founders should do this, I spent about 20% of my time just emailing random journalists, like, why you should write about us. And most people wouldn't do it. And so when you and I met and you wrote about it, and then it got picked up on Techmeme, we signed up 10,000 people the next day, and
Jason Hiner (04:57.229)
Wow, wow.
Rob (04:59.196)
And that just led to a boom. And then suddenly, and this was right before Christmas, so everybody was busy, and still VCs were calling me, they're like, what is your company? We're hearing about it everywhere. Because that's the interesting thing about the media, right? When you lead with an interesting story, it gets picked up from other places. And so yeah, you kind of transformed the face of that company.
Jason Hiner (05:03.437)
That's right.
Jason Hiner (05:15.341)
told
Jason Hiner (05:21.719)
That's crazy, you hadn't ever told me that, which is amazing. Todd had sort of said something to me before, because later, I can't remember at what point, he was like, you should have Rob write you a testimonial, because he's like, this changed everything. So the interesting thing was, I wrote that story, and when I heard about it, I really had that aha moment, like, man, why has nobody thought of this before? And so that's how I wrote it, I wrote it like that.
Rob (05:24.339)
Yeah.
Rob (05:34.524)
Yeah.
Jason Hiner (05:51.836)
that was probably in the story somewhere. And it was also, like you said, mid-December of 2009. The traffic is bad, right? It's a bad time to write tech stories because there's not much going on. But in a sense that worked in your favor, right? Because you weren't competing with other big tech stories at the time. So that thing went out there and that story absolutely blew up almost immediately. And much to my surprise, cause this was like a late December story that, you know,
Rob (06:04.979)
Yeah.
Jason Hiner (06:21.519)
when traffic is bad, but it got picked up by Techmeme. I remember my friends at This Week in Tech, they were asking me about it. They ended up covering it, and they had a huge audience at the time. It was like the leading tech podcast on, you know, Apple and other platforms. And so they talked about it a bunch. We got linked to from all over the place. Literally everybody was linking to that story and re-amplifying it.
Rob (06:28.489)
Yeah.
Jason Hiner (06:51.439)
And so, yeah, I mean, it was just one of those crazy things that went pretty wild. And then Backupify, I remember, had this really interesting journey, but I do remember too, Jason Calacanis tweeting about it, like the day after it hit Techmeme, Calacanis did. Okay.
Rob (07:06.985)
Yeah.
Rob (07:10.74)
So he called me right after Christmas. I don't know how he got my phone number, but he called me and he proceeded to introduce himself. Jason was not nearly as popular back in 2009 as he is today. He was well known in tech circles, but he was doing this new thing called Open Angel Forum. And the way we raised money was, so he called me and he said, I love what you're doing. I want you to come to LA and pitch at this thing in January called Open Angel Forum that I'm doing.
Jason Hiner (07:18.929)
You
That's right. That's right.
Rob (07:40.085)
I was like, hey man, we actually saw it, but the applications are already closed. He's like, no, no, we haven't picked the companies. It doesn't matter. I want you there. So I flew out there and did it. And because it was the first Open Angel Forum and he put it together, he'd invited all the big-time investors. Chris Sacca was there, and Josh Kopelman and Shervin Pishevar.
Ron Conway, I got to pitch all of the top early-stage investors in the world at that time, in one place in LA. And so, you know, we went in there with no money, about to shut down, and we left with a seed round. So that was cool.
Jason Hiner (08:13.995)
That's amazing. That's amazing. So yeah, it's one of those crazy things, there's a little bit of luck there and some serendipity. But from that, one of the things that I took away, and I followed what you did all the time after that, was that you were really honest with me about, here's what the product doesn't do yet. Here's what we do. We have a narrow remit, and we're trying to do it really well.
Rob (08:20.563)
Yeah.
Rob (08:38.707)
Yeah.
Jason Hiner (08:43.919)
And I respected that a lot. And I respected that you didn't try to, like, snake-oil-salesman the thing. You were real honest about, here's what we're about. We think it's pretty interesting. We think there's a real reason to have this and to do it. And so you made it a really easy story to write about. So for founders out there that are pitching journalists and stuff, it's like, don't be a marketing person. Be more like a product
Rob (09:12.093)
Right.
Jason Hiner (09:13.889)
person because you spoke to me like a founder, like a product lead, and you were real clear about what you were trying to accomplish, what the product was trying to do.
Rob (09:15.709)
Yeah.
Rob (09:23.22)
Yeah, I agree, and I think it's gotten even worse, right? Because in a noisier world, what we're lacking more than ever is authenticity, and I think people want that. I think they want that in their products, I think they want that in their media, and so with all this AI slop now, it's just gonna get...
Jason Hiner (09:30.786)
Yeah.
Jason Hiner (09:40.481)
can get worse.
Rob (09:40.584)
worse and worse and worse. And I think you do have to lean in, particularly if you're an experienced product person. I think you can credibly say, and here's what's coming next. And yes, we know how to build products and do stuff, because we've done it a lot. And so it's not a bad thing to foreshadow what's going on.
Jason Hiner (09:49.911)
Yeah.
Jason Hiner (09:57.805)
Very good. So that for you was like a launch pad. Like you did that. You eventually sold that company. You had your exit. You did multiple exits, you know, from there.
Rob (10:01.139)
Yeah.
Jason Hiner (10:09.364)
And so, yeah, I would love to, you're now the CEO of this AI company, Neurometric AI. You just came out with your leaderboard on thinking models, which is thinking algorithms, sorry, models is still sort of burned into my brain all day long, on thinking algorithms, which is really interesting, and which we wrote about on The Deep View. And you had multiple opportunities. You could have done multiple things. One of the things I just would love to get into with you is,
Rob (10:14.482)
Yep.
Rob (10:18.739)
Yeah.
Rob (10:24.595)
Yeah.
Jason Hiner (10:39.298)
you've had multiple exits, you sort of have your choice of what you wanna work on, what you wanna spend your time on. How do you figure that out? You know, cause I think that's something a lot of people aspire to. They wanna get to that point. But it's not always easy when you get to that point, when you have multiple choices or a lot of choices.
Rob (11:00.562)
Yeah, sometimes it gets harder.
Jason Hiner (11:02.144)
Deciding what, yeah, is tough, right? So how do you decide? How did you decide with neurometric AI and how did you decide in general? Because obviously you've had that moment at various points in your career.
Rob (11:04.212)
you
Rob (11:14.292)
Yeah, so we sold Backupify in December 2014. You know, I was 38 years old, so I just thought, like, oh, I'm not going to retire, but I'm going to work another five or six or seven years and then I'm going to retire. Right.
But the thing that I chose to do next, so I looked around in 2015 at, like, blockchain, internet of things, corporate messaging, because Slack was kind of big, and AI. And I decided, like, huh, looks like this AI stuff is really starting to get some legs. And so I've been in AI ever since then. And now we're at this point where...
I mean, we're still in the early days of it and it's massive. I feel like, so for one thing, it's like, I would feel terrible sitting on the sidelines and just like watching all this go by when we're in one of the most interesting technical times in the history of the world. In terms of how to pick what to do, it's funny, people don't believe this, but I would actually prefer not to do a startup. It's just no matter who you are.
Jason Hiner (12:12.78)
Even though you've done like four or five now, yeah.
Rob (12:16.178)
Yeah, no matter how much money you've raised and what you've done, it's always hard because you're always doing something.
you're either resegmenting a market and going against an incumbent, or you're doing something that's new that people are unsure is going to be big or going to matter. And it's just so hard to find believers. I mean, think about this: even with your big public companies, Tesla and Nvidia, about half the market disagrees and is going in and shorting the stock or selling the stock. So now whittle that down to something that you have no data about. I mean, it's crazy. And so it's really, really hard, but
Jason Hiner (12:39.404)
True. Yeah.
Rob (12:49.588)
But I spent some time in VC, and I love investing. I love listening to all the ideas and picking the winners. But running a VC fund is actually a lot more than that. There's a lot of LP communications and financial analysis, and you get bogged down in a lot of legal things in these deals.
Jason Hiner (13:09.472)
How long did you do that, Rob? How long were you doing VC?
Rob (13:13.46)
I mean, I've been part of funds as, like, a venture partner and stuff part time for a long time, but I did it full time for two and a half years. And there was a lot to love about it. It's a super flexible job, which is cool. And you can make a lot of money if you're really, really good at it, but it takes a long time. You're six or seven or eight years in before you know if you're any good.
Jason Hiner (13:20.064)
Okay.
Jason Hiner (13:34.22)
You
Rob (13:35.923)
And it actually gets a little bit repetitive. Early-stage founders typically need the same kinds of things. You spend a lot of time just introducing people to, you know, upstream VCs and stuff like that, which I didn't like. And then the bigger problem for me was, when you're an operator and you're doing something in a business every day, every week, every month, you have goals, you're trying to move forward. Sometimes they turn out not to be the right things, but at least you feel like you're making progress. There are a lot of times in VC where you go and talk to a bunch of companies and do some research on some spaces, and it's like, three months and you've
done nothing except rule out a bunch of investment opportunities, right? And that didn't feel as good with my personality. And then I also got a little bit burned out on the early-stage stuff. You know, I'm an advisor to a hedge fund now on AI.
Jason Hiner (14:09.229)
Wow.
Rob (14:25.618)
It's a lot more fun to help them think about it. There's so much more information on public market stuff. So the reason I chose Neurometric was it just seemed to have this combination. Basically my rule was I would go back and be an operator if I felt like something could be really, really big. And I think inference, AI inference as a category, is going to be one of the biggest markets in the history of the world, because you're effectively taking all these workloads that humans do. And, you know, right now we can realistically automate four, five, six percent of them, right? And this is already a
multi-hundred-billion-dollar market. Think about what it's gonna mean when you can automate 80% of it. I don't know if we'll ever get to a hundred percent. It might be a really long way away. But you've got a good 10, 12, 15-year path to get to that 80%. And these are gonna be multi-trillion-dollar markets, taking these human workflows and converting them into compute workflows. And you need a whole bunch of new compute and infrastructure and everything to manage that. And so what that means as an entrepreneur or an investor is that your edge cases and your small,
narrow, niche parts of the market can be billion-dollar outcomes, and all that kind of stuff, right? So I decided to do it because it's very intellectually stimulating, it was a big, big opportunity, and, yeah, I felt like I was able to put together a team that would approach this very differently than what a lot of people have been doing.
Jason Hiner (15:50.882)
Where did you start? Was there already a founder and you joined or was this an idea that you started with? You're like, I need to go find, build a team to do this thing.
Rob (15:56.55)
No.
Rob (16:01.618)
Yeah, I started it. But the initial idea wasn't what we ended up doing. The initial idea, so I'm a hardware engineer by training. I used to do ASIC design. So I used to design computer chips for military and space applications. And a lot of people don't know this, but when I came out of school in the early 2000s, hardware was kind of dead. Processors were getting faster. Programming languages were getting higher
levels of abstraction. And so everybody just went into software and not hardware over the next 20 years. What happened was these AI workloads came along, and people don't understand how much these AI workloads, even today, stress the cutting-edge GPUs. They're not nearly as efficient as they could be.
And these compute workloads are getting worse and worse. It's led to this explosion in hardware innovation. There are these chips like SambaNova and Cerebras, and edge chips like Mythic and Positron, and all this stuff. And so as a hardware geek, it's amazing. It's an amazing time. So last fall, fall of 2024, my last company got acquired. It was a so-so exit, and
Jason Hiner (16:50.912)
Okay.
Jason Hiner (17:04.183)
Yeah
Rob (17:13.46)
I was looking at what to do next, and Cerebras had filed for their IPO. They haven't gone public yet, but a family office asked me some questions about it, would I take a look at it for them? And it led me down this path of studying these AI accelerators. So I pulled in one of my old co-founders, a guy named Byron Galbraith, who happens to be an expert CUDA programmer. So when you hear people say that CUDA is the moat for Nvidia, that's the language people use to take your model, get it on the hardware, and tell it what to do
Jason Hiner (17:34.157)
yeah.
Rob (17:43.367)
on the GPU. Byron was working with CUDA back in version 0.2 a decade ago or more. And so we started looking at these hardware accelerators. And we initially thought, companies are going to use some GPUs and some of these other accelerators, and they're going to need a way to work through all that. So we were going to build a system that managed that.
Jason Hiner (17:49.293)
Wow. Yeah, I was going to say, that's like 10 years ago, right? Yeah.
Rob (18:08.62)
And Byron kept asking me, like, why? The question really became: why is inference going to grow such that you're going to continue to need these? And he kept asking me, and every time I would say, oh, it's these, you know, inference algorithms, test-time compute, post-training algorithms, they go by several different names. They all mean the same thing, which is the way you run the models and the things you can do at that point in time. And after like the third time I said it, I paused and I went, wait, is that where we should be focused?
Jason Hiner (18:35.998)
You
Rob (18:37.266)
And it was, let's pull on that thread, right? So we pulled on that thread, and then we were working with this hardware guy, Dave Rauchwerk, who built a company years ago that built a $9 CPU. So he's like an optimization nerd for hardware, and a supercomputer guy. And then the three of us found another friend of mine, Calvin Cooper, who...
Jason Hiner (18:39.552)
Yeah.
Rob (18:58.492)
I'd gotten introduced through a mutual friend and he and I were just kicking around ideas as I was working through all this and he was being super helpful. He's the least technical of the three of us, so he does all the business ops now. So we just started going down this thread of model management, the algorithms, the thinking algorithms, post-training algorithms that you can run on the models and where all this stuff is gonna go and decided to build Neurometric. So it was really sort of like...
I would say it was almost like the wrong idea brought in a co-founder, and the co-founder helped me get to the right idea. Yeah.
Jason Hiner (19:31.82)
That's pretty common though, right? Like most startups, they sort of have directionally maybe the right idea, but by the time they get to market, it looks a lot different.
Rob (19:46.857)
Yeah, it's like going on vacation, right? It's never exactly what you think it's gonna be. You always have it in your mind, like, it's gonna be this and here's what we're gonna do. And then, you know, it rains one day and it's cold one day, and stuff like that. And you end up shifting a little bit. And so it's kind of in the general vicinity of what you meant to do, but not like you planned. And I think startups are often a lot like that.
Jason Hiner (20:08.812)
Okay, another one of the things I love about you is that you're a little bit of a contrarian. And this comes across in a number of ways. And look, complete disclosure here, I am too. I'm the person that drives my friends and family a little bit crazy, because they'll say A, and whatever A is, I always say, what about B? And so it sort of makes you annoying to have as a friend or a family member, but it makes you good
Rob (20:14.749)
Yeah.
Rob (20:30.813)
Yeah.
Rob (20:36.574)
Yeah.
Jason Hiner (20:38.836)
as a journalist. So it works out all right for me. But I think it also makes you good as a founder, or as somebody who's really trying to get to the truth about something, right? Get to, what do we actually know? How do we know it? What are we trying to accomplish? And how do we cut through the noise to get to what's real? And so, we talked recently about this new thing that Neurometric AI
Rob (20:39.975)
Yeah.
Jason Hiner (21:07.236)
released, which was this leaderboard for thinking algorithms. And you gave me a great example, and maybe I should back up. So we wrote about this on The Deep View, so you can go and read the story about the Neurometric AI leaderboard. You can also go to leaderboard.neurometric.ai, right? So the thing is, your main idea here is that
Rob (21:10.28)
Yes.
Rob (21:17.704)
Right?
Jason Hiner (21:35.147)
the future of this is gonna be that these smaller,
more specific models are gonna be better for AI workloads. They're gonna be a lot more efficient, and can potentially make your workloads faster and less expensive. So I'm boiling it down, we can unpack that. But what I wanted to get to, in sort of your way of being a contrarian thinker, is you said, think of it like a human being. And who do you ask questions to? So say that for me again, because I...
Rob (21:51.667)
Yes.
Jason Hiner (22:11.435)
I loved that analogy and that for me solidified it, Of like, okay, now I understand what we're talking about.
Rob (22:14.034)
Ahem.
Rob (22:17.692)
When you think about where AI is going to go, the best place to look is our only current model of high-level intelligence, which is human beings. So I always start with, like, okay, well, how does a human do it, right? And when you look at it through that lens, you realize a lot of things. Like, before we talk about the models, we can talk about how people are having this problem of, how do you evaluate AI and know if it's good or not, right? Like if I tell this thing to go off and write a research report,
Is the research report good? How do I know? Well, I look at it on the human side and go, well, how do you know if a human does it well? You're like, well, somebody reviews it. Well, that's probably gonna happen in the AI world, right? And so I always use humans as parallels. And when you do that, you start to realize that AI is not gonna solve every problem, because the world's probabilistic, humans still have problems, and there are some things that are just hard and ill-defined. So from the model perspective,
I always tell people, so there's this point of view, a lot of investors have had this point of view, and a lot of the AI market had this point of view until maybe late summer 2025, of, like, one model to rule them all. You're going to build one model and it knows everything. But in every physical system in the world, there is this concept of diseconomies of scale, right? Which is, as things get bigger, you lose something sometimes.
And so what starts to happen is like, running a big model, a query might be 800 milliseconds time to first token, whereas with the small model, it might be one tenth that, one fifth that, something much faster. And so I always tell people like,
Again, use the human analogy. Think about the smartest person you know. And I can tell you, for me that's Stephen Wolfram. I've met Stephen Wolfram several times. He got his PhD from Caltech at 19. He is a brilliant, brilliant human being on any topic you ask him about. But it's like, would you hire Stephen Wolfram to clean your bathroom? Would you hire Stephen Wolfram even to do your accounting? No, there are people who are specialized in those things and are better at those things.
Jason Hiner (24:01.205)
Wow.
Jason Hiner (24:19.948)
Would you hire Stephen Wolfram to predict who's going to win the Heisman Trophy? Like, no, you wouldn't.
Rob (24:23.026)
Yeah, no, exactly. It's not his thing. And even if you could build a model that had all that in it, that model has all this knowledge that it doesn't really need. Like, why run that big expensive model every time if you just want to predict the Heisman, right? Run your smaller sports model. So.
So then the question is, why didn't we just start with the smaller task models? Well, because you couldn't, it wouldn't have made sense, because if I give you one simple task model, like, so what, that doesn't change your life, right? So these big general models are the place where you can start and be like, that's cool, I can do this, I can do that. And then what happens, if you look at the AI maturity curve that we see, is people are going to build the way you build teams in a company. I mean, how do you start a company? It's just you, right? You're starting your web design company and it's you. You do the sales, you do the marketing.
You do the web design, you do the accounting, you do whatever. And then the first thing you do is you offload the stuff you don't like. You offload the accounting, then you offload the marketing, then you hire more designers. This is what the models are gonna do. So from an AI maturity perspective, we see people start with one big model, they get their product up and working, they're hitting GPT-5 or Anthropic or something, and then they start to peel off the pieces where they're either like, well, it's not cost effective to really send that to that model and pay that, or they have latency issues, it's too slow, they need faster
response. Or sometimes with these models you can get better accuracy. And what's interesting about moving to small language models, or other similar task-focused models, let's call them, is that they could literally be the trifecta: if you have a well-defined task, they could be faster and cheaper and more accurate than using the best models. And so, you know, when we started going down that path, I was like, this is something that we've got to help people be able to do, because everybody is going to want to do this. And it also has the
advantage that if you're a big company and there are certain tasks and workflows that you would rather not turn over to all the big model vendors in the world, great, now we would help you create your own small, task-specific models and workflows and systems that manage these things.
Jason Hiner (26:29.974)
That's a great place to be as a business. Like, I'm gonna go in and, so you have an instant ROI. I'm gonna help you build something faster, more accurate, and cheaper. You know, whatever somebody's gonna pay you, they're likely gonna get back, right, at a certain multiple of that.
Rob (26:39.378)
Yeah.
Rob (26:50.216)
Yeah, yeah, exactly. It varies a lot depending on what the company wants, because some people don't care about latency, right? They have a workload that just takes nine minutes on an AI model, and if you can shave that to seven, it doesn't matter, it doesn't make a difference. But sometimes it can make a huge difference, right? Particularly if you're looking at cost. I mean, from a cost perspective, you're looking at typically like a 90% drop to move from a large language model to a small language model that's specific to your use case.
And so not every business, for every case, is going to, you know, like if your business does a hundred million dollars and you spend $20,000 a year on AI models, you don't really need to cut that back. But, you know, I can tell you, I don't know if I can name the company publicly, but there's a publicly traded company that we spoke to who's like, we spend over $10 million a month on inference. And so if you can shave 2%.
Jason Hiner (27:45.257)
off
Rob (27:49.117)
Like, yes, we'll take it. And we think, if we work with them, we'll probably be able to shave 50%.
Jason Hiner (27:51.926)
We'll take it, Yeah.
Jason Hiner (27:58.157)
It's just that inference, there's gonna be so much that goes into inference, because that's at the level of making queries and getting back results, primarily. And we know now that even just OpenAI alone has hit over 800 million weekly users. And this thing's only been around for three years. We're still just at the very, very early innings of this, and businesses even less so.
The challenge there is, just like you said, is you can spend a lot of money really quickly on these things. If you're a business, your inference costs can start stacking up really fast. I you mentioned a public company that's spending 10 million a month. Wow. Wow.
Rob (28:43.762)
Yeah.
And you see this sometimes, like if you're active on Reddit on the AI forums, a lot of times you'll see these developers who go out and build something, and suddenly they'll complain, like, wow, my app is only serving a few users and it's costing me $800 a day, $2,000 a day, stuff like that. But there are these competing forces, right? There's this natural force where hardware is getting more efficient, models are getting more efficient, and there's a cost curve
Jason Hiner (29:04.448)
Wow. Okay.
Rob (29:16.118)
of drop that we see, similar to what we saw with storage. But also we're developing better algorithms that think longer and do more. And so those forces are fighting, right? And so it's still growing quite a bit. But the net impact is going to be: as the cost of inference drops, we're going to use it for more tasks. It's Jevons paradox. It's going to stimulate a lot of demand.
Jason Hiner (29:41.453)
So that's the bet that you're making with Neurometric. So on the leaderboard that you created, you built this on Amazon Bedrock, you took the models that were available on Bedrock, and you used CRM Arena, which is really applied AI business tasks, essentially, because you wanted to measure it based on that. And then you came up with the leaderboard: here's who does well on this, here's who does well on that. And we put it in the story because of course people care about the horse race. We were like, OpenAI,
Rob (30:10.984)
Yeah.
Jason Hiner (30:11.406)
guys, the 120B model, GPT-OSS, was number one. So people care about that. And then the Chinese models were next, like DeepSeek R1 and Qwen, so great. But what you said, where it really gets interesting, though, is that you found that that was the best at the most tasks,
Rob (30:22.995)
Yeah.
Jason Hiner (30:31.702)
but it was also terrible at some tasks. And some of the smaller models were much more efficient at certain kinds of things. So break that down a little bit, unpack that a little bit, because that's really interesting and that's where the rubber meets the road. And then we'll get to how you're gonna next create a product to help companies really optimize their inference cost, which is, I know, the crux of what you're building.
Rob (30:33.778)
Yes.
Rob (30:57.608)
Right, so the purpose of the leaderboard was, we talked earlier about the world being noisy and it being hard to get people's attention. And so the typical way you would build a startup is you'd build a product, you'd make some people like it, and then you'd go figure out how to market it. And I wanted to make sure we started with some marketing, because I wanted to already have more attention, and I knew we would find the lane in the inference space to swim in. And so we built this leaderboard to make the world aware, because it's, I wouldn't say a contrarian position, but a non-intuitive position,
that the performance between these models varies, particularly when you use these different thinking algorithms on top of them, right? So for the people that aren't familiar with it: you have the same model, but maybe there's three different ways that you probe that model and ask it a question, and you might get high variability in your answers on a per-task basis. And so what we were showing is, well, if you really want to optimize your system, you shouldn't use one model, you should use three to five models, and they would be different depending on the tasks that you're trying to do as a business. And so the leaderboard is about
educating the population on that, and then also being able to attract the right customers who then come to us and say, well, can you run this leaderboard analysis on our data? And I think the way that this will expand, and what you'll see over time, is we will do more and more task-based benchmarks. You'll see things other than CRM Arena that are coming. You'll see new kinds of metrics, because we've already seen, there's some data that we haven't released yet that shows stuff like:
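The per-task routing idea Rob describes, using whichever model wins each task rather than one model for everything, can be sketched as a simple lookup. The task names, model names, and scores below are all invented for illustration:

```python
# Hypothetical per-task leaderboard scores (accuracy), invented for illustration.
SCORES = {
    "lead_qualification": {"model_a": 0.91, "model_b": 0.78, "model_c": 0.85},
    "ticket_triage":      {"model_a": 0.62, "model_b": 0.88, "model_c": 0.80},
    "quote_generation":   {"model_a": 0.70, "model_b": 0.71, "model_c": 0.93},
}

def best_model(task: str) -> str:
    """Route each task to whichever model scored highest on it."""
    scores = SCORES[task]
    return max(scores, key=scores.get)

routing = {task: best_model(task) for task in SCORES}
print(routing)
# {'lead_qualification': 'model_a', 'ticket_triage': 'model_b', 'quote_generation': 'model_c'}
```

In practice the same table would also carry latency, cost, and the thinking algorithm used, so "best" becomes a multi-criteria choice rather than a single accuracy number.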
Rob (32:31.604)
if you run the same model, you know, GPT-OSS 120B, on two different systems, right? You know, Amazon and GCP, or Together AI, Fireworks, whatever. It doesn't always run the same. You don't always get the same accuracy. You definitely don't get the same latency. So the hardware configuration matters. There's a whole bunch of things in there. And so what we're ultimately trying to do is help people design their AI systems, because people have all kinds of different constraints and requirements they want to optimize for, right? And so being able to
Jason Hiner (32:59.916)
Sure. Yeah.
Rob (33:01.598)
navigate that is going to be really important. And then what that will lead into, what's coming in January,
is a tool that monitors your AI traffic, your AI workloads, right? Your prompts that go in and out. And it constantly gives you a dashboard of: prompts that look like this, categories of tasks that you're doing on AI, could either be improved, you know, maybe save money, maybe get better performance, by moving to these models with these thinking algorithms. And these thinking algorithms, these post-training algorithms, are our really unique contribution to the industry.
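A monitoring tool of the kind Rob describes might, at its simplest, bucket observed prompts by task category and flag high-volume categories where a cheaper model clears an accuracy bar. This is a speculative sketch, not Neurometric's product; the traffic log, accuracy table, and threshold are all made up:

```python
from collections import Counter

# Hypothetical traffic log: (task_category, current_model) per request.
traffic = [
    ("summarization", "frontier_llm"),
    ("summarization", "frontier_llm"),
    ("code_docs", "frontier_llm"),
    ("summarization", "frontier_llm"),
]

# Invented benchmark: accuracy of a small model per category.
small_model_accuracy = {"summarization": 0.94, "code_docs": 0.71}
ACCURACY_BAR = 0.90  # only recommend a switch when the small model clears this

volume = Counter(category for category, _ in traffic)
recommendations = [
    category
    for category, count in volume.most_common()  # highest-volume first
    if small_model_accuracy.get(category, 0.0) >= ACCURACY_BAR
]
print(recommendations)  # ['summarization'] — high volume, small model clears the bar
```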
Jason Hiner (33:40.013)
So Rob, if I'm a CIO and I hear this, you know.
I'm probably gonna say to you, all right Rob, that all sounds good, but how am I gonna keep track of all the different models and which ones I should use for which? Because the complexity gets really crazy really quickly, right? Especially if you're talking about the same model sometimes having different results based on, I know you've explained it on the leaderboard itself, you're doing best-of-N and the various types, chain of thought, all the things.
Rob (33:59.188)
Yeah.
Rob (34:11.028)
Chain of thought, and beam search is coming.
Jason Hiner (34:14.752)
So, you know, what do you do if somebody says, that's interesting, I would love to do it, but it's just super complex for me to try to deal with all that?
Rob (34:23.698)
Yeah, it's a really good question. So one of the tools we're coming out with is going to help manage some of that. Obviously, if you have 86 tasks, you don't want to use 86 different model and system combinations, right? Some of them can double up. And so the decision you have to make as a systems designer, that we try to help with, is, we give you the education to look at it and go: well, given our constraints and everything else we want to do, we can use up to three models, or up to 12 models, or whatever. Or, these three tasks need to be optimized and everything else can kind of go to GPT-5. And so we sort of help people think through some of that. And the tool that we're coming out with provides that insight.
The next step in that tool, and this will be later in the year, probably late spring, early summer, will be to manage that for you. So not just make the recommendation of, you know, you could use Llama 3.3 70B and that might help with this task. In January or February, if you use our tool and that's the recommendation, you're going to have to pull that model up on Together AI or wherever you run your models and do it yourself. But
by the late spring, early summer, we'll be able to do that for you as well, right? Because the other thing that's happening in the industry is there just aren't enough people that know how to do this stuff.
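Collapsing many per-task winners into a small model budget, 86 tasks but only a few models, is a selection problem. One plausible approach (not necessarily Neurometric's) is a greedy pick that adds whichever model raises the aggregate score most; all numbers below are toy values:

```python
# Toy sketch: given per-task scores for several models, pick at most K models,
# then assign every task to the best model among the chosen ones. Scores invented.
SCORES = {
    "task1": {"m1": 0.90, "m2": 0.80, "m3": 0.50},
    "task2": {"m1": 0.40, "m2": 0.85, "m3": 0.60},
    "task3": {"m1": 0.70, "m2": 0.60, "m3": 0.99},
    "task4": {"m1": 0.88, "m2": 0.50, "m3": 0.52},
}
K = 2  # model budget

def total_score(chosen: set) -> float:
    """Sum over tasks of the best available score among chosen models."""
    return sum(max(s[m] for m in chosen) for s in SCORES.values())

models = {m for s in SCORES.values() for m in s}
chosen = set()
while len(chosen) < K:
    # Greedily add the model that raises the aggregate score the most.
    chosen.add(max(models - chosen, key=lambda m: total_score(chosen | {m})))

assignment = {t: max(chosen, key=lambda m: SCORES[t][m]) for t in SCORES}
print(sorted(chosen), assignment)
# ['m1', 'm3'] {'task1': 'm1', 'task2': 'm3', 'task3': 'm3', 'task4': 'm1'}
```

Real systems would weight each task by volume and fold in cost and latency constraints, but the shape of the decision is the same.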
Jason Hiner (35:36.0)
So you'll automate it, basically. Okay.
Jason Hiner (35:46.57)
Okay.
Rob (35:47.165)
Right. And so you have to build that. This was something we got wrong in the beginning: I was like, we're going to build the dashboard that has all the knobs and whatever. Those have to be hidden. Actually, very few people want to use those. Most companies are like, do this for me, figure it out for me, manage it for me, because hiring AI engineers, even within the AI engineering ecosystem, the number of people that understand all the layers of the stack and how they flow together is very small. And so, yeah, we'll manage a lot of that dynamically for people over time.
Jason Hiner (35:57.516)
Mm.
Jason Hiner (36:05.724)
Oof. Yeah.
Jason Hiner (36:17.92)
That's great. So that's what Neurometric AI is really trying to do, manage at the systems level, something where, sort of, the promise is it can help you figure out how to get the right model and the right thinking algorithms to save you money, be more accurate, be more efficient, cost less, essentially.
Rob (36:40.616)
Yeah, yeah. And the other thing is, I think when you look out two years, I think working at the systems level is gonna give a lot of really unique opportunities to do things like, so imagine now a system where you have seven models, and instead of just passing a prompt into the model, of here's a customer support workload or a code documentation workload or whatever,
in addition to that, you're going to pass in data of: here's what's going on with all the other models in the system right now. So you're going to give more environment. You know, with AI, they take your text, or whatever it is, and put it in mathematical vector space, and they use that to create these embeddings that they put into the model. So now imagine having systems-level embeddings, where each model is aware of everything else that has just gone on and is going on within the system. I think these things are how you're going to really build more intelligent systems over time,
by extrapolating intelligence to that systems layer.
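Rob's idea here is speculative, and so is this sketch: each model call carries, alongside its own prompt embedding, a shared vector summarizing recent activity across the whole system. The stand-in embedding function and the moving-average state are made-up illustrations, not an existing API:

```python
import random
import zlib

DIM = 8  # toy embedding size

def embed(text: str) -> list:
    """Stand-in embedding: a deterministic pseudo-random vector per text."""
    rng = random.Random(zlib.crc32(text.encode()))
    return [rng.gauss(0, 1) for _ in range(DIM)]

class SystemState:
    """Rolling summary of recent activity across all models in the system."""
    def __init__(self):
        self.state = [0.0] * DIM
    def update(self, event, decay=0.9):
        # Exponential moving average over recent cross-model events.
        self.state = [decay * s + (1 - decay) * e for s, e in zip(self.state, event)]

state = SystemState()
state.update(embed("model_b: escalated a refund ticket"))

# A call to model_a would carry both its own prompt embedding and the
# shared system context:
model_input = embed("summarize this support thread") + state.state
print(len(model_input))  # 16
```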
Jason Hiner (37:37.006)
Nice, okay, so that's really a big mission. It sounds sort of specific, but at the end of the day, that's a really big thing to chase. Yeah. Okay. Now I wanna talk a little bit about...
Rob (37:42.824)
Yeah.
Rob (37:46.344)
Yes, for sure.
Jason Hiner (37:53.452)
the broader industry, because you have this exposure, you've been working on AI for a decade, so you have this exposure to the full stack, right? And you have a lot of hot takes on social media about various things. That's true. That's fair, about everything. And so I would love to dig into some of these things, because I'll be honest, the reason that I came to work at The Deep View was this:
Rob (38:04.564)
About everything, not just AI, but yeah.
Jason Hiner (38:19.469)
I was at ZDNET before. I was focused on AI, but I could only spend so much of my time on it. But looking at the AI landscape, there are so many conflicting signals, it's so confusing, there are so many diametrically opposed views. And really what The Deep View is trying to do is say, we're gonna help you sort through all of that.
Rob (38:38.099)
Right.
Rob (38:42.729)
Yeah.
Jason Hiner (38:42.825)
And when I looked at that, I said, that's a 10-year mission that's gonna be worth doing, because when there's a lot of confusion, a lot of disruption, a lot of mixed signals, that's journalist catnip. There's a lot of great work for us to do to help people figure it out. So.
Rob (38:48.029)
Yeah.
Rob (38:59.828)
Right. Makes sense.
Jason Hiner (39:02.381)
I've got, there's all kinds of mixed signals on things, and I would love to get your take on a bunch of them. So let's start with hardware, because you mentioned you are a hardware engineer, you understand these things. And one of the hottest topics out there right now, you mentioned it earlier, is Nvidia and GPUs and CUDA. What we've heard is Nvidia has so much demand right now that they have
Rob (39:09.588)
All right.
Rob (39:14.088)
Yeah.
Jason Hiner (39:32.388)
multiple people that are willing to buy every GPU they make. They just can't make enough to fill all the orders that they've got. And that's going out years at this point. So I see these stories, for example, that are like,
Rob (39:38.515)
Yeah.
Rob (39:47.827)
Yeah.
Jason Hiner (39:55.96)
Google trained all of its Gemini 3 model on its own TPUs, and now it might sell some of them to Amazon. And same thing with Amazon: at their conference, the AWS re:Invent conference, they mentioned Trainium. They're making some of their own chips. And so the stories become like, oh no, these companies are gonna tick off Nvidia by buying some of these or some of those. But the truth is not that. The truth is they couldn't buy them from Nvidia if they wanted to. These people just want to buy anything they can get their hands on.
Rob (40:22.386)
Right, yeah.
Jason Hiner (40:25.903)
And so, what I see is a lot of noise about all of these things, and I would love your take on all of that. You mentioned it earlier. So, your take on the horse race of all of the GPUs and the hardware makers. And then with that: if these things do get optimized and smaller models become a thing, does that reduce some of the demand? Or is there just so much demand that it's gonna still go on for years? Those are a couple of questions.
Rob (40:55.216)
Yeah, it's a great topic area to dig into, and I will tell you my personal opinion. Obviously this can't go on forever, with Nvidia getting to, you know, a $50 trillion market cap or whatever, but
I think it's still going to go on for quite a while. Because, as you said, at the time of recording this podcast, Nvidia stock has been flat to slightly down because of the TPU announcement, that Google's selling those, I think to Meta maybe, it said, and to Anthropic and all that. One of the things you have to understand is TPUs are much more difficult to use. So AMD GPUs with ROCm, which is their software,
and Nvidia GPUs with CUDA are the two most mature ecosystems for getting your model onto a chip and having it run.
And that's hard to do, because these models are very, very, very large by computer program standards. And if you look at what's happened, Amazon also announced their Trainium 3 chips, but again, a similarly difficult software stack. So the way this is going to play out, I think, over 2026, I think Nvidia will be popping again in 2026, because what's happening is TPUs, and Trainium chips from Amazon, are really only going to be available to the most sophisticated customers in 2026. It's going to be hard for anybody else to use them.
Jason Hiner (42:11.469)
Okay.
Rob (42:14.07)
A startup might invest the time to learn and do it, but your average company that's buying GPUs to train common model tasks is not going to be able to do it. I also think, related to that, the sort of Michael Burry stuff about, hey, these guys aren't doing depreciation right for these chips. Well, think about this: Google's version one of their TPU, and I think they're on version 7, maybe 6, maybe 7,
all the teenagers out there love that I just said that, if you have teenage kids, you get that joke. And so their version one TPUs are still fully in use. It's like...
Jason Hiner (42:45.027)
That's right, six seven, there you go.
Rob (42:56.736)
six, seven, eight years later, and 100% maxed out, going all the time. And the stuff that we're doing, and companies like us that are helping people use smaller models, build smaller language models and all that, that's what's gonna fill up the slower hardware, right? The cheaper hardware and the older hardware. And yeah, it's not gonna run the most cutting-edge models, but...
Right now, I haven't seen numbers on this, so I'm making this up, but my guess is 90, 95% of AI workloads go to cutting-edge models, and they don't have to. And I think when you look around in seven years, those cutting-edge models are gonna run 10 or 15% of AI workloads. So...
So I really believe, number one, Nvidia is gonna be fine, at least for the next year or two or three, and I just think there's so much demand for this. The other mistake people make, the mistake that a lot of the analysts made early on, is they look around and they go, well, the whole worldwide software market's like six hundred billion dollars or whatever the number is. How can Nvidia be worth five trillion? How can AI, all this infrastructure, get so big? It's like, no, no. The labor market just in the United States is like nine or ten trillion dollars, and
probably three or four times that worldwide. When you start eating into that, it makes total sense how you could have $4 trillion worth of software spend that is effectively AI eating into labor, making it all more efficient. And at the time of recording this podcast in December, I don't think we're in a bubble. I think we're starting to be at the beginning of the bubbly phase. Yeah.
Jason Hiner (44:33.197)
Okay, it's just beginning, the bubbly stuff. Okay, good, that brings me to my next favorite topic that I want to ask you about, which is the massive data center build-out, right? And really,
OpenAI is the headliner for it, but it's not just OpenAI, right? Like, the EU just invested a bunch; they're gonna essentially try to 6x the current amount of data center capacity in all of the EU, which isn't a lot, to be clear; they're far behind the US and China. But there's also lots of other companies and...
Rob (44:50.845)
Yeah.
Jason Hiner (45:13.387)
Yeah, and even startups that are trying to do the, you know, neoclouds and other things. So I just...
I feel in my bones that the capacity build-out is premature, because what if, not all of a sudden, but over time, you optimize these workloads and you don't need as much, and we've built out all this capacity? What does that mean? So I would love your take on the data center build-out.
Rob (45:45.939)
It's a really challenging question, because these things happen on such different time scales, right? You have to get power to these places, and a new power source takes longer to build than a data center. And the data center takes a while, and the chip takes a while. But you get progressively faster as you go down that path. And so you have these competing things, where I think the power is most behind. That's the biggest problem. I was at a conference in San Jose, the Infrastructure Conference,
back in September, October of this year, and I think it was the Facebook head of infrastructure who basically said there's almost no places left to go get a gigawatt of power, which is what you really need to run one of these big data centers. And so that's gonna be the main constraint. And when people talk about nuclear power and adding all kinds more, I think that build-out is gonna be beneficial, because it's gonna happen slowly enough that
we're gonna be happy about it, even if there's some bumps along the road. The data centers themselves, I think, will have some mild bubbles, ups and downs. I do think it's the right trend. We might be building a little too fast, but we are gonna need a lot of them. But I would expect to see more ups and downs in that over the next five to seven years than in the power market.
And then obviously the chips are going to really keep going for a while. But I think what will happen is that that curve that looks like this for your power needs will start to shift down when you look at the stuff that Neurometric and other companies are doing around small language models. People will realize, wow, we can scale to double the amount of... So the metric people look at is intelligence per watt.
And if you look at intelligence per watt, that is going to get more efficient, because it's getting more efficient at the chip level, at the rack level, at the data center level, all that. And companies like us are going to help with that. So that curve is going to dampen slowly. And, you know, to double the amount of intelligence you need in the world in AI, you're not going to have to double everything else. You're going to be able to, maybe, use 60% more power and 45% more data centers. And so I
Rob (48:01.654)
I think those things will probably catch up, probably not next year, but probably 2027. You'll start to see how those things are realigning.
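The "intelligence per watt" point reduces to simple arithmetic: if delivered intelligence doubles while efficiency also improves, power draw grows by less than 2x. The numbers here are invented to mirror Rob's example:

```python
# Toy arithmetic for the "intelligence per watt" point. Numbers are invented.
demand_growth = 2.0      # 2x the AI "intelligence" delivered
efficiency_gain = 1.25   # 25% more intelligence per watt
power_growth = demand_growth / efficiency_gain

print(f"power needed: {power_growth:.0%} of today's")  # power needed: 160% of today's
```

That is the "60% more power to double intelligence" shape Rob describes; the actual efficiency gains at the chip, rack, and data center levels are of course uncertain.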
Jason Hiner (48:11.053)
Okay, well, as I'm fishing for hot takes, another one I wanna ask about is OpenAI versus Anthropic, because you understand business, finance, startups, all of these things, and you also understand the AI ecosystem so well. It seems to me, my take is, that they are just taking such diametrically opposed paths of building.
Rob (48:17.65)
Yeah.
Rob (48:34.184)
Yeah.
Jason Hiner (48:36.557)
And Anthropic is building this much more enterprise business, there's a lot of money there, sort of slower, more methodical, and they're trying to build a profitable company over time. And then OpenAI is signing crazy deals, making amazing commitments for data center capacity, almost like it's going out of style, which, like you said, is constrained. And they're betting
Rob (49:03.187)
Yeah.
Jason Hiner (49:06.711)
on a consumer market, by and large, that there's gonna be a consumer market for AI, which today is still, I mean, there's a bunch of people paying for it, so there is a there there. It's not like nothing; it's not pets.com or something from 25 years ago. But it is a really different path between those two.
Rob (49:26.334)
Yeah.
Jason Hiner (49:31.957)
What's your take on that? How close to the mark is that? And what do you think?
Rob (49:38.133)
I was at an AI dinner put on by a New York-based tech bank maybe seven months ago, and there were like 20 of us in the room, all AI people. And they went around and said, what's your hot take? And most people don't actually wanna give a hot take, because they're embarrassed or whatever. When it got to me, I was like person seven, and I said, I would short OpenAI if there was a way to do it.
Jason Hiner (49:55.149)
Wow.
Rob (49:59.573)
And two or three people were like, yes. And other people were like, no, they're gonna win everything. And the guy came up to me after and said, this is why I invite you to these dinners. It was an amazing conversation, right? Really good discussion and debate. I think the way I would frame it is: OpenAI is taking the riskiest approach, and they're trying to get their hooks in all parts of the ecosystem. And that's very expensive at this stage. And as a result,
Jason Hiner (50:08.661)
Yes.
Rob (50:25.702)
they can't have a misstep. If they have a misstep, which it already looks like they might have with Altman's code red thing, things could come crashing down and they could end up being the MySpace to Facebook or whatever. But now Microsoft owns a big chunk of them. It was roughly half; I think it's now like 25 or 27% after the conversion. And so I'm sure Microsoft would buy it, or some other people, or maybe Oracle would try to buy them.
Jason Hiner (50:41.613)
27%, yeah, yep.
Rob (50:51.668)
Like, I think OpenAI would live on, is what I would say. But they're gonna have to take a more Anthropic-like approach. They've looked at this as a race, and I think what we've learned, that surprised everybody, myself included, is that it's not as much of a race as people think. If you had asked me two or three years ago
about AGI, I would have been one of these people that was like, man, if you get there a week before somebody else, then your model can improve itself faster, and then you run away with the world and you own everything. I don't believe that now, for a whole bunch of reasons.
I think we've seen that, right? When ChatGPT came out, everybody thought, game over, OpenAI won. And then lots of people have caught up. The Chinese open-source models have passed the US ones on a lot of levels of performance. Anthropic's clearly caught up; those are very good models. Gemini's in the lead right now; Google's caught up. Google has the assets, is so deeply vertically integrated on AI, and has the talent,
such that the people I know that have left OpenAI to go work for Google will say, Google's got 1,000 times more compute than OpenAI has. Like, let's say OpenAI solves the AGI problem first, and their smart AGI is trying to train better models of itself. If Google can figure that out within a matter of weeks, it can spin up way more compute, do the same thing, and surpass them again. So I think OpenAI is starting to realize they were playing the wrong game.
And so the question is, can they recover now? But yeah, I suppose if they were going to IPO, I would not buy the stock at the current valuations. And that's just me. And I will tell you that I personally shorted Oracle for a while when the announcement came out with Oracle and OpenAI's deal, because my thought was, if OpenAI stumbles and people worry about their commitments, Oracle's the one that's going to have the hardest time dealing with it.
Jason Hiner (52:42.274)
Yeah.
Rob (52:52.694)
That turned out to be a good idea.
Jason Hiner (52:54.893)
It did, it did. Interesting. Well, you did not fail to deliver on the hot take, much appreciated. Let's go to the other end of the spectrum though, Rob, and talk about AI startups, because you've looked at them as a VC, you understand and are in the ecosystem, you know lots of other founders, you've been in it for a decade. As you look around and you see other founders doing things and starting things,
Rob (53:01.682)
Yeah.
Jason Hiner (53:24.847)
what are you looking at and saying, those are good ideas, and what are you looking at and saying, oof, the risk there seems a little bit crazy and I don't necessarily understand what you're trying to do?
Rob (53:39.573)
I still write a newsletter,
a Substack, on investing in AI, and I've written some things in the last year. Sequoia came out and said the app layer is where you're going to make money with AI, and I basically wrote a post that said, here's why I disagree with Sequoia's thesis. I think it's the infrastructure layer. So first of all, I would say, anything I say about a specific category, there are always exceptions. There are always pockets to make money, even in crappy industries and bad pockets. But as a whole, the two areas that I would not play in:
the application layer is going to become more fragmented and more competitive, and the reason is because it's easier to write code. And so the calculation for a buyer is, well, why do you buy CRM from Salesforce? Because it would take you forever to write your own CRM. Some really big companies with large teams will do it so they can get exactly what they want. But if you could write your own CRM in six weeks with a team of two or three people, which is what we're going to get to, would you still buy Salesforce? Maybe; maybe you need all the add-ons and ecosystem, and your use cases are complicated or whatever.
But what if you can get exactly what you want, just build it, and AI coders can maintain it for you? If that happens, you're going to start seeing a lot more competition in every application category. Which doesn't mean you can't make money, but it means you just have to make smaller companies that are funded less
and run a certain way, and you're not going to see these big $10 billion publicly traded SaaS behemoths in the application space that you've seen. So that's why I'm much more positive on the infrastructure layer. And then these agents, the second area: I think there will be some niche vertical agent use cases that make money, but by and large, I think the big companies are going to win that. It's a big problem in the AI space: you need attention, and distribution
Rob (55:30.853)
is getting harder for everybody, whatever business you're in, because the world's noisier than it used to be, and AI is making it easier to be even noisier. And so people have to adapt, right? This is why you came to The Deep View, because you guys are trying to give, like you said earlier, here's how to make sense of the noise. So people are gonna come back to authenticity and all that.
But a lot of these things are an advantage to the large companies. You also need to be able to afford a lot of compute. You need to be able to get interesting data sets. You need to be able to hire AI talent. I mean, when I got into startups 15, 20 years ago, a senior engineer might make $170,000, in 2007, 2008, and you could probably get that person
to take a $20,000 or $30,000 pay cut to come work for you and take some equity. But now, as a startup, some of those same types of people, if they work at Amazon or Google, might make $400,000, $500,000, $600,000, plus they've got a stock package on top of that. It's like,
even if they would be willing to take a pay cut to come work for a startup, they're not gonna take it. If you wanna move from, I could make 400 instead of 500, your startup investors are gonna go, you can't pay somebody that at a startup. So that's also making it hard for startups. The talent and the pay that people have now in the AI space is crazy. So the big, big companies have a lot of advantages. And I think particularly in the agentic space,
Jason Hiner (56:44.479)
Yeah
Rob (56:59.38)
when you look at how you compete against Agentforce from Salesforce, I mean, Microsoft has fully fledged agents, and Amazon and Google have platforms that help you build them. Not that there aren't some opportunities, but there's hundreds if not thousands of startup companies building agents, and I think that's gonna be a difficult, difficult space.
Jason Hiner (57:23.489)
Yeah, okay. Speaking of spaces, let's talk about physical spaces a little bit. You also have this other thing that you do on the side, in addition to your newsletter and this little startup that you're running, which I'm sure is taking a lot of your time, and you're about to launch publicly and all that. You do AI in NYC. You're based in New York, which is great. Sorry, AI in New York.
And I spend a lot of time in New York. I love the tech ecosystem in New York, and obviously there's a lot of media stuff there too. So tell me about this. How did you get involved with that? What are you trying to do there? What does that look like?
Rob (58:07.56)
Well, the AI ecosystem in New York is fantastic, but it's very fragmented. And a lot of the parties don't know each other. know, the robotics people don't always, there's a group that come out of sort of data science, out of finance. And then there's your more traditional people and they come out of the big tech companies like Mongo and Datadog.
And I was, so I just had this podcast I did for fun and to meet people called AI innovators, had a couple thousand listeners and it just felt, you know, I started doing it five or six years ago and it just felt stale because there's a million people doing AI podcasts now. thought, well, how could I stand out? So the guy that was producing the podcast said, I think, I think you need something like the all in podcast, but for AI, where you talk about the news and you have a couple of hosts and you riff on each other. And I started thinking about it I was like, let's do that live in studio.
with people in New York. So let's just focus on New York and what's going on here, because it's really the applied AI hub of the world, I think. Obviously there's a ton going on on the West Coast, but West Coast people are still very focused on AGI. So we poke fun at them sometimes on the show. I mean, I have a ton of friends out there, but we poke fun at how obsessed they are with AGI and ASI. And then there's...
Jason Hiner (59:07.095)
Yeah.
Rob (59:16.34)
You know, on our show that's about to come out, we just had Doug O'Laughlin on from SemiAnalysis, which is a big semiconductor newsletter. He just moved to New York this summer and he's like, all of our customers are here in like a three-mile radius in Manhattan. So it's a really, really great place to build, and a great quality of life, particularly if you're young.
Jason Hiner (59:27.629)
here. Yeah.
Rob (59:36.885)
I don't know if I would want to raise young children here, it could be a little bit challenging, but everything else about New York is fantastic. And on the show we have guests on that aren't just all tech startups or tech people. I mean, we had people on who run a group that helps nonprofits use AI
in New York, and there's people like that. Our theme music for the show was written by the keyboardist for Vampire Weekend. I tried to create it with Suno and some of these AI tools, and I just couldn't get what I wanted.
Rob (01:00:09.492)
So he's gonna come on with his manager, hopefully at some point, and we're gonna talk about the impacts of AI on the music industry. We're really trying to be the center hub of everything in the city, you know: government AI, how does the NYPD use AI, how do the New York Knicks use AI. Those are the people we're trying to get on, and not just startups, because there's just so much going on. I just think it's for a casual listener who may not listen to every podcast but wants to see what's going on. And then we put some New York flavor in there.
We ask a New York-centric question at the end. I think last time it was, what's your least favorite subway line? And then we post a lot of outtakes, because we start filming when we arrive and we have all these random conversations that sometimes make it into the clips of the show.
Jason Hiner (01:00:56.173)
That's awesome. Sounds like a lot of fun. And you've got multiple folks involved. How long have you been doing it? Where do people find it if they want to listen to it?
Rob (01:00:58.802)
It is.
Rob (01:01:05.266)
Yeah, we've been doing it for about three months now and it's going well. We're getting about 20,000 listens across all platforms. And yeah, figuring out your podcast metrics is something somebody should fix with AI. It's a little bit hard because we're on Spotify, we're on Apple Podcasts, and we do the video show on YouTube, so you can watch it because we do record video. But just search AI in NYC on any of those platforms and it'll normally come up.
Jason Hiner (01:01:20.385)
Yep. YouTube.
Jason Hiner (01:01:33.423)
That's awesome. How about, you know, lastly, Rob, let's talk about...
just the AI ecosystem in general. Are you glad you jumped back into operating? You mentioned this is such a unique time. Obviously, for me, this is why I decided this is what I wanted to be thinking about and reading about and talking about all day long. So my thesis is pretty similar. How are you thinking about what the next few years look like in the AI ecosystem?
Rob (01:02:01.395)
Yeah.
Jason Hiner (01:02:07.472)
and yeah, how are you kind of calibrating for what's next?
Rob (01:02:12.286)
Well, I agree with you that it's going to be massive. It's going to be the place to be. There's still a ton of innovation coming. I think you said 10 years at the beginning, and like, I think we're probably 10 years away from starting to have a sense of what this is all going to look like at scale and when it's going to slow down. I do think it's going to slow down. And I do think at some point we will have to deal with the fact that, while I think we're a long way from AI taking a lot of the jobs, that day is going to come. Part of the reason I wanted to come back in and be an operator as well is that there needs to be, I mean, you and I, you're still in the Midwest and I spent a lot of time there, and the tech industry needs a little bit of Midwestern values sometimes, right? There used to be these jokes in the 2010s era, like, why are all the best and brightest computer scientists in the world at Google and Facebook trying to get us to click on more ads?
Jason Hiner (01:03:00.877)
Yeah.
Rob (01:03:01.536)
And now those minds get to tackle some more interesting problems. And I think you need a set of entrepreneurs and leaders who don't just, I think tech has sometimes abdicated its role in morality and ethics by just saying, well, you can use it however you want. Yeah, that's true, right? You can do that. I can stab somebody with a kitchen fork. You can use things any way you want, but that doesn't mean these things don't lend themselves to certain uses more than others. So, you know, I think we need more pragmatists at the helm. And I'm hoping that we can not just...
Rob (01:03:40.916)
have an awesome company and have a lot of fun and it's all intellectually stimulating, but also, you know, be a force to help companies navigate and balance out against the risk that you end up with like three companies that matter in the whole world, and they own the AI and do everything. And I think by supporting open source AI and small language models, and helping companies keep their IP and easily make models for their IP and their knowledge workflows, I don't know, it sounds cheesy, but there's a little bit of good in that, I think.
And so, yeah, I mean, I'm having the time of my life. You know, my kids are older, so I'm one of those few people in their forties that doesn't have young kids, because I got married and divorced so young. And so it's like I can work like I was in my early twenties again, and I am, and I love it. It's super challenging. So this is an awesome time. Yeah.
Jason Hiner (01:04:35.468)
That's where you want to be. Can't ask for much more than that in life, like to spend your time on the things that you want to spend it on. Very good. Well, Rob, thanks so much for being on the show. Great catching up with you. Great talking to you, my friend. And yeah, we will talk again soon.
Rob (01:04:42.142)
Exactly.
Rob (01:04:51.784)
Yeah, thanks for having me.