From frontier labs and enterprise platforms to emerging startups reshaping entire industries, The Deep View: Conversations podcast interviews the brightest minds and the most influential leaders in AI.
Jason Hiner (00:02.126)
In this episode, I talked to James Everingham, CEO of Guild AI, which has emerged with the most important asset you can never predict as a startup: great timing. Guild launched in the fall of 2025 with the mission of helping companies and AI builders create a safety layer to supervise AI agents. Six months later, AI agents are the leading trend driving the AI industry forward in 2026. James talked
clearly about how Guild's platform provides dozens of workflow-specific agents that can run various aspects of a company. But Guild also empowers developers to iterate and spin up their own versions of agents in a safe environment that can protect the company against some of the most unpredictable outcomes and can track everything that the agents do. We also talked a lot about the power of bottom-up innovation, going all the way back to James's first startup, his time at Netscape.
Meta, and how he sees the tug of war between proprietary and open source models playing out in the years ahead. James also shared an amazing leadership tip for inspiring innovation, and he shared his best advice for using today's AI tools to get maximum impact. So here it is, our conversation with James Everingham of Guild AI.
Jason Hiner (00:01.606)
All right, well, James, for those who aren't familiar, talk to us a little bit about what Guild does, what problem you all are trying to solve, and what your role is with the company.
James Everingham (00:12.558)
Yeah, absolutely. So, Guild is building what we call a control plane for agents, right? And a control plane isn't a new term, right? It's been around in networking for a long time. It's a governance layer that helps you manage complex systems. Right now, the problem that we're trying to solve, that we're setting out to attack, is that we deeply believe that starting right now, you're going to start seeing a lot of agents embedded in corporate infrastructure. Something we saw at my previous
role, you know, where we were building agents and they started to become very active in the infrastructure. It turns out that you need a central way of managing them. And if you're going to have these seriously working in your infrastructure, you need governance. And that's observability: what are they doing? Traceability: what did they do? What did this agent do a year ago? What did it touch? You need security around them, just like you need with software and people. You need to say what they can access and what they can't access.
And you need integrations. Agents need to talk to your infrastructure, to your tools, and you need rich integrations to be able to make that work. And that's what the control plane does.
Jason Hiner (01:23.506)
Very good. And what's your role at the company, James? Very good. So your role is a little bit of everything. How big is the team?
James Everingham (01:25.94)
I'm the CEO.
until proven otherwise.
That's right.
Right now we're a little bit over 20 people. We're just getting started, but we're growing rapidly. We have roles open. We're hiring a lot and we're excited to expand.
Jason Hiner (01:40.2)
Nice.
Jason Hiner (01:46.066)
Very good. So how about, when did you start the company and kind of why? What was your thesis when you started Guild?
James Everingham (01:54.924)
Yeah, absolutely. We started it not that long ago, in September of last year. The beginnings of the company probably started before that, when we started thinking about it. In my previous role, I was at Meta, leading the developer infrastructure teams, the large team that was responsible for all the workflows inside Meta, building all of the tools that the developers used. A large team, about a thousand engineers,
all very senior and very skilled. We started experimenting with AI and that's where we learned that while the auto-complete stuff like you see in cursor is very valuable and it really helps, you can get a lot more leverage if you start putting these reasoning models and agents into the infrastructure layer. And what that means is...
not only humans are talking to agents, but your tools start talking to agents, and you can connect them up whenever an event fires. So you can imagine having an agent, or a set of agents, paired with your source control system. So whenever you go to check in code, it says: I just saw code get checked in. I'm going to run a code review. I'm going to run a SOC 2 compliance check. I'm going to check this code to see if they pulled in any open source packages that may have a problematic license.
Jason Hiner (02:54.152)
Hmm.
James Everingham (03:18.902)
All of those. So when we built this, we started to see at scale some of these problems that we deeply believed a lot of companies were going to see. And we got excited: look, while we're building for 40,000 internal developers, this would be awesome to go build for 40 million developers and companies. And so, being a serial entrepreneur and getting the call of the wild, I kind of had to go do it.
And that's where we are now.
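The check-in workflow James describes, where an event fires and a set of agents paired with that event all run, can be sketched as a simple dispatcher. This is a minimal illustration of the pattern, not Guild's actual interface; every name in it is invented:

```python
# A minimal sketch of event-driven agents: agents subscribe to infrastructure
# events and all fire whenever a matching event occurs. All names here
# (Event, AgentBus, the example agents) are illustrative only.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Event:
    kind: str     # e.g. "code_checked_in"
    payload: dict # e.g. {"diff": "D12345", "author": "..."}

class AgentBus:
    def __init__(self):
        self._subscribers: dict[str, list[Callable[[Event], str]]] = {}

    def subscribe(self, kind: str, agent: Callable[[Event], str]) -> None:
        self._subscribers.setdefault(kind, []).append(agent)

    def fire(self, event: Event) -> list[str]:
        # Every agent paired with this event kind runs; results are collected
        # so a control plane could later record and audit them.
        return [agent(event) for agent in self._subscribers.get(event.kind, [])]

bus = AgentBus()
bus.subscribe("code_checked_in", lambda e: f"code review of {e.payload['diff']}")
bus.subscribe("code_checked_in", lambda e: "SOC 2 compliance check: pass")
bus.subscribe("code_checked_in", lambda e: "license scan: no problematic packages")

results = bus.fire(Event("code_checked_in", {"diff": "D12345"}))
```

In a real deployment the lambdas would be LLM-backed agents and the bus would be wired to the source control system's webhooks; the fan-out shape is the point here.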
Jason Hiner (03:51.208)
Was there any event or incident that happened that made you think, okay, I need to go and do this? And were there people from Meta that joined with you? Yeah.
James Everingham (03:58.828)
Yes.
Yeah, there was. There are actually people that joined me from Meta, but also from all the way back, up to 30 years into my career, that some of us have been working together. There was something inside Meta, actually. One thing I think a lot of companies were struggling with, and are still struggling with, is: how do you even get started with this stuff? Like, what does it even mean to put agents in your infrastructure?
Jason Hiner (04:11.026)
Great.
James Everingham (04:25.312)
And we had built a system internally that sort of centralized all of that. And it was kind of like an internal app store or a managed software center that you may have seen in a corporation. But what it did is it was a place for these agents where you could go and you could see what agents were there. You could fork them. You could build on them. You could explore them and really deeply see what impact they were having in the infrastructure.
When all of the engineers could go and see this, they got pretty inspired. And pretty soon there were a lot of people building agents, and they started multiplying pretty quickly, to the point where we were just trying to hold the wheels on the cart internally. They were wanting a lot of instances of what we built, and the way we built it needed a whole developer server per instance. So suddenly we were scrambling around literally trying to find tens of thousands of free servers. And when you see that signal, you kind of get a strong level of confidence that you're building something of value. And we're all pretty impact driven, and we want to add value outside. And so the thing that sort of led us out was that internal viral moment of the system.
Jason Hiner (05:48.452)
So you saw that once people got a hold of agents, once developers got a hold of agents, they were like, whoa, I can do something with this that's really interesting. And then it started to multiply pretty rapidly.
James Everingham (06:02.006)
Yeah, it's really interesting. I see this pattern a lot: how do you even get started? And boy, I'll tell you, engineers are tough to convince. They're a skeptical crowd. And the more senior, the tougher. Even me, I have my 25 years of Emacs macros. Force me to go use an AI tool, and I'm going to not be excited about that. But what we did learn is,
Jason Hiner (06:19.825)
Yeah.
James Everingham (06:27.286)
you can earn usage with developer tools. So rather than mandating, saying, hey everyone, you have to go use this stuff, build tools that add value, and then put a spotlight on them so people can see the value, and then they'll want to use them. And that happened. So, you know, we built a system that earned the usage of the developers. They got a lot less skeptical when they could see the agents, see what they were doing, and dig into the code. And they're like, I can modify this a little bit
and have a whole agent go do something else. And just like you see in open source, was the same phenomenon that we saw internally.
Jason Hiner (07:05.384)
Pull instead of push, and like bottom up rather than top down.
James Everingham (07:09.154)
That's right. Yes. Yeah. Earned usage versus mandated usage is much more effective in my opinion with these types of systems.
Jason Hiner (07:20.412)
Nice. How does that relate then to the product that you've built in Guild and what you have to offer, where you bring that into a customer's environment?
James Everingham (07:30.754)
Yeah, sure. Well, it was certainly the inspiration for what we built. We didn't duplicate the identical system. We didn't actually have a control plane inside Meta; that was something we were just starting to lean into. But the dynamics of the model for the managed software center internally were something that we...
Jason Hiner (07:45.169)
Okay.
James Everingham (07:58.648)
We liked them, we wanted to reproduce them outside. But we saw the control plane as the next problem we were going to solve. That was the thing where we said, look, all these companies are going to hit it. We think the timing's pretty good, because we have a lot of companies looking for it and reaching out to us now. And you probably see in the press some pretty spectacular agent fails that are popping up. And so I think it's becoming a little more obvious that this is needed.
Jason Hiner (08:20.626)
Yeah.
Jason Hiner (08:26.714)
Yeah, for sure. So something I've been wanting to ask you about, James, is this idea of non-deterministic systems. I wrote an article about this recently, in the context of helping people understand what agents and generative AI do, because even at the user level, misunderstanding the fact that you can ask them the same question and they'll answer it differently at different times makes people think that these things have personality or consciousness or thinking. It can trick you into thinking things about them that aren't actually true. But the same concept applies in the enterprise too: these things can get super unpredictable because they're non-deterministic. And that's where a lot of the control plane piece comes in, right? Am I understanding that correctly?
James Everingham (09:17.688)
That's right.
James Everingham (09:22.316)
Yeah, you are. And if you're going to have non-deterministic infrastructure in your company, you need a deterministic layer on top of it to put the rails around it, so that you can contain the behavior that you would be worried about. Now, this isn't new either, right? Like, you and I are non-deterministic. Go ahead, ask me a question, then ask me the same question again, and I'll give you a different answer. And so I think that,
Jason Hiner (09:43.878)
Hahaha
Yeah, yeah.
James Everingham (09:50.232)
people need to think about them in that way. And, you know, it's not that... I still am very skeptical that all these agents will replace people lock, stock and barrel, but you do need to put some of the same guardrails around these that you put around your workforces, and you need a layer to do that and enforce it. And what you said about the non-deterministic part: with deterministic software, you can test whether it's working pretty easily. You put something in and you get a single answer.
Jason Hiner (10:16.605)
Yeah.
James Everingham (10:19.618)
But now, with these agentic flows being non-deterministic, this is behavior that you need to validate. You can't expect that it'll be correct and that you'll get the same answer. You have to observe the behavior. You have to try to create a set of evals, a set of tasks where you kind of know what the answer is, run the agent across those, and increase your confidence that it's operating properly, while also knowing for sure that sometimes it likely won't. And that's where you have to put security around it. You have to separate when it accesses data from when it executes something, and know when it's trying to do one of those things. You probably want to intercept and check some sort of a policy, and that's what a control plane will do for you.
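The eval idea here can be made concrete with a toy harness: since a non-deterministic agent won't give the same answer twice, you run it repeatedly over tasks with known answers and measure a pass rate rather than expecting exactness. The `flaky_agent` below is an invented stand-in for a real LLM-backed agent:

```python
# A toy eval harness: run a non-deterministic agent many times over tasks
# with known answers and report a pass rate, i.e. a confidence estimate
# rather than a proof of correctness. flaky_agent is purely illustrative.
import random

def flaky_agent(question: str) -> str:
    # Stands in for an LLM-backed agent: usually right, occasionally not.
    return "4" if random.random() < 0.9 else "5"

def run_evals(agent, tasks: list[tuple[str, str]], trials: int = 100) -> float:
    passed = total = 0
    for question, expected in tasks:
        for _ in range(trials):
            total += 1
            passed += (agent(question) == expected)
    return passed / total

random.seed(0)  # fixed seed so the toy run is repeatable
confidence = run_evals(flaky_agent, [("what is 2 + 2?", "4")])
# confidence lands near 0.9: high, but never a guarantee
```

The point matches what James says: you raise confidence with evals, and you still need the runtime policy checks because the agent will sometimes be wrong anyway.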
Jason Hiner (11:16.102)
Nice, okay. So forgive my ignorance on this, but I want to ask you about a concept that I'm trying to understand, because this word is coming up a lot right now, which is harness, harnesses, you know, around these AI models. And even with the leak of Claude Code in the last few days, people were talking about how essentially what leaked shows that Claude Code is a harness for their model. And so I guess I want to ask if you could
James Everingham (11:34.349)
Yeah.
Jason Hiner (11:44.956)
tell me a little bit about the distinction between like a harness and a control plane, and maybe if there are connections there.
James Everingham (11:52.984)
Yeah, sure. I think that, you know, if you're looking at some of these, a harness actually isn't really that different than an agent itself, right? You have a software system that is going to control a model in some pattern, and you're going to get a bunch of logic back and execute that logic. So you have Codex, you have Claude Code, you have all of these systems that are
Jason Hiner (12:01.915)
Okay.
Jason Hiner (12:15.163)
Okay.
James Everingham (12:22.594)
really just harnesses on a model, right? They're using the model as the brain for whatever goal they're trying to achieve. Now, if you think about what a control plane is, the harness can run inside a control plane. So whenever that logic comes back and that harness is trying to do certain things, you want a control plane to be able to
Jason Hiner (12:40.157)
Hmm.
James Everingham (12:48.802)
validate each step and make sure that it's something that's permitted, or something that won't go wrong, and to be able to record it and collect the right data, so that your finance team can look and see where all this token cost is going and who's using it. It's not that different from operating system design, if you think about it, right? You have different protection layers in an operating system, like
Jason Hiner (13:11.112)
Okay.
James Everingham (13:18.434)
you want to protect the kernel itself, so you don't want your software to be able to just reach right in and start changing memory. And so that goes through a device layer or a policy layer. And, you know, it's very similar to that.
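One way to picture a harness running inside a control plane is as an interception layer: every action the harness wants to take is checked against a policy before it executes, and recorded either way. A minimal sketch, with an invented default-deny policy, not Guild's actual mechanism:

```python
# A toy control-plane interceptor: actions are checked against a policy
# before execution and logged for audit regardless of the outcome.
# The policy entries and action names are invented for illustration.
AUDIT_LOG: list[dict] = []

POLICY = {
    "read_file": True,       # permitted action
    "write_prod_db": False,  # blocked by policy
}

def controlled_execute(action: str, args: dict):
    allowed = POLICY.get(action, False)  # unknown actions are denied by default
    AUDIT_LOG.append({"action": action, "args": args, "allowed": allowed})
    if not allowed:
        return None  # the harness's step is refused, but still recorded
    return f"executed {action}"

r1 = controlled_execute("read_file", {"path": "README.md"})
r2 = controlled_execute("write_prod_db", {"table": "users"})
```

The audit log is what gives you the traceability James mentioned earlier: what did this agent do, and what did it touch, even for actions that were refused.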
Jason Hiner (13:33.064)
Okay, that makes sense. How about, you know, agents: there was a lot of talk about agents in 2025, and clearly you were working on it internally, you know, inside Meta when you were there. There was a lot of enterprise talk about agents. And then we hit this very different moment in early 2026 with the sort of personal AI agent phenomenon with OpenClaw. You know,
How has that impacted what you all are doing and are you seeing companies that are like wanting to, has it created more urgency because these companies are like, okay, now people are starting to roll up agents that we weren't even aware of and we want to get a handle on this kind of thing or has that had other impacts on the way you all think of the world and the way sort of the clients and customers are coming to you to find some help?
James Everingham (14:24.354)
Yeah, I love the OpenClaw thing. I think it's just so awesome that, you know, one person can go out and build something that just goes crazy like that. I mean, that's the magic of the industry and the time we're in. But for us, I think it was pretty validating. You saw what happened when people started giving agents open access to their local systems. Even if you try to prompt them in a way that sounds safe,
Jason Hiner (14:31.624)
You
Jason Hiner (14:36.402)
Mmm.
James Everingham (14:51.848)
LLMs are super goal oriented, right? They're going to go try to achieve that goal, and they don't have a sense of right and wrong. So they're going to cheat if they can. I think you saw Anthropic put out a press release or announcement, maybe a month ago, that on their own benchmarks the model had learned to cheat, right? It found the answers in the open source, unencrypted them, and knew the answers.
Jason Hiner (15:00.444)
Yeah.
Jason Hiner (15:16.018)
Yeah.
James Everingham (15:16.6)
So it validated that. We started to see people's private keys being shared in public, and all of that. And I think that really was a wake-up call for CIOs and CISOs and companies going, well, we can't have that happen in our company infrastructure. And Guild, what we're building, a lot of it is like an enterprise OpenClaw. It's like, okay, there's tool integrations, you can run agents, but
Jason Hiner (15:34.866)
Yeah.
James Everingham (15:45.122)
they're in a nice little bubble-wrapped container that controls what they can access and records it, so you know they're not just going to go YOLO it in your infrastructure completely without guardrails. So OpenClaw was very validating for us, very inspirational in many ways. Even down to the viral agent building: I think we saw people sharing agents, and you started seeing them pop up quickly.
Jason Hiner (16:03.976)
Very good.
Jason Hiner (16:10.813)
Yeah.
James Everingham (16:13.142)
It just goes to show you: you give something to developers, and even non-developers, where they can build value for themselves, and that's pretty magical. And when you see that, you'll soon start seeing it show up in your corporate infrastructure, because they're doing it for themselves. That's just the way it's always worked.
Jason Hiner (16:35.41)
Yeah, yeah, for sure, for sure. So does your system, the guidelines you put around agents with your platform, can it run OpenClaw? Can it run Codex? Can it run Claude Code? Do you run sort of a mixture of agents, your own agents? What does that look like if someone is a customer working with you all?
James Everingham (17:01.656)
Sure. Right now we don't run OpenClaw itself, or any of these other runtimes, but we could. We have our own that we've integrated. It's pretty easy to run an agent on our system. Agents need to be inside our system, so you can easily pull them in, but we don't really track them yet if they're outside the system. So basically, you know, that's the way that works.
Jason Hiner (17:09.276)
Okay.
James Everingham (17:30.358)
I may have forgotten your question now. I may have confused myself. So forgive me if I didn't answer it. Just repeat it and I'll give you a different answer.
Jason Hiner (17:30.504)
Okay, no, you're good, James. I was basically asking: do you have your own sets of agents essentially within Guild? And then is it like what you were talking about, where if a company's working with you all, they can go take that agent and sort of make their own agent off of it? How does that process look?
James Everingham (17:43.519)
yeah, yeah.
James Everingham (17:52.962)
Yes. Yeah, well, right now we do. We have maybe 60 agents on our internal platform, everything from small prototypes to active agents in our own infrastructure. When we do release more publicly, there will be a whole set of agents that are some of the most common ones we see across companies. Like onboarding: everybody needs that.
Jason Hiner (18:17.672)
Mmm.
James Everingham (18:20.91)
AI code review: there are lots of people trying to solve that. We expect to have a whole slate of AI code review agents. One thing we learned at Meta is you can't have one super agent that can review any diff well. You probably need hundreds of them. You need your graphics-specialist tuned agent. You need your SOC 2 compliance agent. And then when you get to bespoke languages (even internally at Meta, it was Hack), you probably need specialized agents for those. So we'll have some of those. We'll have a series so people can quickly go find agents, set up an account, set up their authorizations, and within minutes you'll have agents working in your infrastructure and be able to build on them.
Jason Hiner (19:07.782)
Wow. So how many companies are you working with now? You mentioned that, you know, the company's only been around about six months, maybe seven, eight months. How many companies are you all working with? And you said that you're not completely out in the open yet with all the capabilities that the company has to offer.
James Everingham (19:28.268)
That's right. Yeah, I mean, I think what's interesting is the types of companies that we're working with. So we're working with everything from small companies to some of the largest companies that you may recognize. And it's interesting how this platform resonates with each of these different customers for a different reason. Like the smaller companies that may not be AI forward. For example, you know, we were working with one company in the Midwest that
Jason Hiner (19:33.595)
Okay.
James Everingham (19:56.842)
is not really a tech company. They didn't even know where to start. They're like, we're getting pressure from the CEO, we're getting pressure from the board to move faster, and we don't know how. We showed them our platform, and just like the use case I mentioned, within minutes there were agents operating in their infrastructure, and they could build on it from there. So for zero-to-one people, trying to figure out how to get started and even make an impact in your infrastructure, this is a great solution. Now, the larger companies...
And one of them hit something that you mentioned earlier, too. One of the larger companies has agents all over, much like what we saw at Meta, where they're like, we don't even know where they're running. Some developers have them running on their laptops. They had a pretty large Anthropic budget, and one developer literally blew through their entire monthly budget with an agent in seven hours, and no one knew. So they were excited about,
Jason Hiner (20:52.328)
man.
James Everingham (20:56.086)
nope, you can put a circuit breaker on that in the control plane. If an agent starts downing tokens like it's St. Patrick's Day here around my office, you can turn it off automatically, and provide a centralized place where these are being run and you can see everything happening with them. So they're excited about that. And then one other version of these companies are the highly regulated companies. And it turns out
this is really useful for them. You can build really interesting agents for compliance, you know, like GDPR checks. And these specs change over time, so you can have an agent that fires up every week and says: did the SOC 2 compliance spec change? And if it did, fire off this agent to go walk the code base and highlight what changes need to be made, and analyze them. An agent can actually write those changes, and write the test cases, and so forth. So,
Jason Hiner (21:54.312)
Yeah.
James Everingham (21:56.174)
those are basically the main types of companies. We're working with a small set, focused on making them highly successful, but very soon we're just gonna open it up and let everyone in.
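The token circuit breaker James mentions can be sketched in a few lines: the control plane meters spend per agent and trips automatically once a budget is exhausted. Names and numbers below are illustrative, not Guild's actual interface:

```python
# A sketch of the circuit-breaker idea: meter token spend per agent and
# refuse further work once a budget is exhausted. Purely illustrative.
class BudgetExceeded(Exception):
    pass

class TokenCircuitBreaker:
    def __init__(self, monthly_budget_tokens: int):
        self.budget = monthly_budget_tokens
        self.spent = 0
        self.tripped = False

    def record(self, tokens: int) -> None:
        if self.tripped:
            raise BudgetExceeded("agent disabled: budget exhausted")
        self.spent += tokens
        if self.spent >= self.budget:
            self.tripped = True  # turn the agent off automatically

breaker = TokenCircuitBreaker(monthly_budget_tokens=1_000_000)
breaker.record(900_000)   # heavy but within budget
breaker.record(200_000)   # pushes past the budget: breaker trips
try:
    breaker.record(1)     # further calls are refused
except BudgetExceeded:
    refused = True
```

This is the "no one knew for seven hours" failure mode inverted: the overrun still happens once, but it is detected at the moment of spend and shut off rather than discovered on the invoice.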
Jason Hiner (22:11.014)
Interesting. What will that look like when you open up and let everyone in? What is that, like self-service, so they can sign up on your website? Oh, okay.
James Everingham (22:17.144)
That's right. Yeah, you should be able to go there. And like I said, if you're looking at trying to figure out how to even start with agents in your infrastructure, if you're a company that's trying to figure out what these can do for you, you should be able, within minutes, to have this working in your infrastructure. And it will be completely self-serve, you know, very easy, very accessible, even for people who aren't engineers, to go in and set stuff up and get it working in a safe way, where people don't have to lose sleep at night wondering what's going on. So that's our goal. In the future, if we're successful, there'll be lots and lots, tens of thousands, of agents that you can go and search through and find ones that may work for you. And that's where we're hoping to be highly differentiated: in our open source community model that sits on top of all of this as well.
Jason Hiner (23:21.096)
Very cool. What's the coolest, most unexpected use of your technology that you've seen so far? I bet you've seen some pretty wild things from what people are doing.
James Everingham (23:34.252)
Well, you know, of our technology... let me blend it with some of the things I saw even in my previous role at Meta that were pretty exciting, because the same things are available on our platform. One of the great things about these systems, by the way, is when you put them out there and you give the tools to communities to solve their own problems, stuff pops up that you never would have thought of in a million years. One really interesting agent was called the DiffRiskScore agent, and that's been open sourced, actually, at least the math behind it; there's a white paper out there you can find from Meta. Your delivery systems, also called CI/CD systems, build your product and ship it. They pull the source code all the time, rebuild the product, and ship it out. So when an engineer checks in code, that ultimately makes it into the product. Now, the DiffRiskScore agent, whenever the system would pull the source code, would analyze how risky each change was for causing an outage. And it could pretty accurately figure out if it was high or low risk. And it turns out a lot of the changes that engineers make are very low risk. What that enabled Meta to do was to start completely eliminating the code freeze over holidays. A lot of companies, right around December, they're like, nobody touch anything. They lock it down. And then a couple of weeks after the holidays they open it up, and then it's like a giant traffic jam, right? So DiffRiskScore
Jason Hiner (25:11.589)
interesting.
Jason Hiner (25:17.064)
Absolutely, absolutely.
Jason Hiner (25:25.702)
Yep. Yep.
James Everingham (25:28.236)
was slowly eliminating this holiday code freeze by only stopping risky diffs. Another one was we were building self-healing fabric. One of the great things about agents, and in Guild, is you can create a workspace and throw a bunch of agents in it that will work together. So self-healing fabric, what does that look like? An outage happens. An agent wakes up and goes: can I find out what code caused it? I found the code. Can I correct it and write a test harness that will validate it? I'll keep doing that until I find something that works. Is this high risk or low risk? If it's low risk, just push it to production. So that's self-healing fabric, by chaining these things together. And that's all software development. But another one, and I was surprised because I've been focused on
Jason Hiner (26:10.344)
Hmm.
James Everingham (26:28.194)
developer stuff, is issue triage. It turns out that bugs and issues, or tech support issues, or requests from outside customers at large companies, these are massive, massive manual processes. By the time you go through a hundred thousand support tickets to get to the 10 that make it to engineers, that funnel is very expensive in human capital cost. And so
we have a bunch of different agents, and some of our design partners are even having them do this automatically for them. We have a bunch that we just use internally. We have everything from monitoring Slack channels, where if there's a problem it automatically files a ticket, you know, all of these things. Yeah. So, yeah.
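The DiffRiskScore gating James describes can be sketched as a simple threshold gate: score each diff, ship the low-risk ones even during the holiday window, hold the risky ones. The scoring heuristic below is invented purely for illustration; Meta's actual model is the one described in their published white paper:

```python
# A toy version of risk-gated shipping: score each diff, ship below a
# threshold, hold at or above it. The features and weights are invented.
def risk_score(diff: dict) -> float:
    # Hypothetical features: lines touched, and whether core paths changed.
    score = min(diff["lines_changed"] / 1000, 1.0)
    if diff["touches_core"]:
        score = max(score, 0.8)
    return score

def gate(diffs: list[dict], threshold: float = 0.5) -> tuple[list, list]:
    ship = [d for d in diffs if risk_score(d) < threshold]
    hold = [d for d in diffs if risk_score(d) >= threshold]
    return ship, hold

diffs = [
    {"id": "D1", "lines_changed": 12, "touches_core": False},   # low risk
    {"id": "D2", "lines_changed": 4000, "touches_core": True},  # high risk
]
ship, hold = gate(diffs)
```

The freeze-elimination insight is visible even in this toy: instead of an all-or-nothing freeze, the gate turns "nobody touch anything" into "only risky changes wait."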
Jason Hiner (27:08.136)
Very cool.
Jason Hiner (27:19.117)
nice. Yeah.
Is it fair to say that your agents are more task-specific agents?
James Everingham (27:29.23)
I think a lot of them are, you know. I mean, ultimately they all decompose into task-specific agents, but I would say that they're more like workflow agents, where a workflow is a series of tasks, you know? Like the example I gave of a compliance spec changing: there's a task to go check if it changed, but then there's a task to walk the code base, and a task to rewrite the changes to bring it up to date. So they're more workflow-specific than task-specific.
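The workflow-versus-task distinction can be sketched as a chain of task functions, each consuming the previous task's output, following the compliance example above. All the function names here are hypothetical, invented to mirror the steps James lists:

```python
# A sketch of workflow-as-task-chain: a workflow agent is an ordered series
# of task agents, each fed the previous task's output. Names are invented.
def check_spec_changed(_: None) -> bool:
    return True  # pretend the SOC 2 spec changed this week

def walk_codebase(changed: bool) -> list[str]:
    # Task 2: find the files affected by the spec change.
    return ["auth.py", "logging.py"] if changed else []

def draft_changes(files: list[str]) -> dict:
    # Task 3: propose rewrites bringing each file up to date.
    return {f: f"updated {f} for new spec" for f in files}

def run_workflow(tasks, seed=None):
    result = seed
    for task in tasks:
        result = task(result)  # each task is itself agent-sized work
    return result

changes = run_workflow([check_spec_changed, walk_codebase, draft_changes])
```

In a real system each step would be a full agent (LLM-backed or not) rather than a plain function, but the decomposition of a workflow into chained tasks is the same.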
Jason Hiner (28:11.016)
Okay, that's helpful. So since we're talking kind of some technical stuff here, I have another technical question that I've been thinking about. Something that's been coming up a lot, and you elevated it too, is that these agents can burn through a whole lot of tokens and a lot of money really fast. I keep hearing this over and over again, and a lot of people are sort of seeing stars on this one. So I was talking to another company, Neurometric AI, and they were working on this problem by creating more small language models, domain-specific language models, even task-specific language models, and things like that. What models do you all use with your agents? Are they mostly the general purpose models? Are you looking at using some of these other models that can sometimes be faster and cheaper and save you some token costs? How are you all handling that?
James Everingham (29:06.542)
Well, I'm glad you asked that, because I think an important aspect of what, at least, our customers want is that they don't want to be tied to a single model, because different models can do different things better. They're constantly changing, so you don't want to get tied to a specific vendor. And we're vendor neutral. So we work with all of the models. It's a bring-your-own-keys, bring-your-own-model architecture. You can change it. You can use multiple ones for different things.
And yeah, they're expensive. It's kind of funny. I catch myself doing this, by the way. All technology, when I'm learning it, seems to have the same pattern. It's like, what can I even do with this? And then suddenly, I think I understand it really well. And I'm like, I can do everything with this. And then a few months later, I'm like, yeah, I should probably not do anything with this. And I think we're somewhere in between step two and three as an industry. So yeah, these are expensive.
Jason Hiner (29:52.466)
Hahaha.
James Everingham (30:05.164)
Doing things with tokens is expensive, and sometimes you just don't need that. An agent in our system, you know, can be a set of prompts that work with an LLM, but it can also just be deterministic code. Sometimes you just want to break the glass and have some code that does it really cheaply and efficiently. Turns out you don't need a $5,000 token bill to, you know, sort a giant linked list. You can do that
Jason Hiner (30:20.456)
you
James Everingham (30:33.944)
pretty effectively with regular code. So I'm excited to see where we end up with that, but you need the type of functionality that we have in order to be able to get there.
Jason Hiner (30:36.84)
You
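The "break the glass" point, that an agent can be prompts plus an LLM or just deterministic code, can be sketched as a registry that routes each task to the cheapest handler that can do the job. Everything here is illustrative; the registry shape and handler names are not Guild's API:

```python
# A sketch of mixed deterministic/LLM agents: the registry routes tasks,
# and plain code handles what doesn't need a model. Names are invented.
from typing import Callable

def sort_numbers(payload: dict) -> dict:
    # Deterministic handler: zero tokens spent sorting a list.
    return {"result": sorted(payload["items"]), "tokens_used": 0}

def summarize_with_llm(payload: dict) -> dict:
    # Stand-in for an LLM call; a real one would bill per token.
    return {"result": f"summary of {payload['text'][:20]}...", "tokens_used": 850}

REGISTRY: dict[str, Callable[[dict], dict]] = {
    "sort": sort_numbers,             # cheap deterministic code path
    "summarize": summarize_with_llm,  # LLM path, only where it's needed
}

def run(task: str, payload: dict) -> dict:
    return REGISTRY[task](payload)

out = run("sort", {"items": [3, 1, 2]})
```

The design choice mirrors the interview: treat "agent" as an interface, so a $5,000 token bill is reserved for work that actually needs a model.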
Jason Hiner (30:46.408)
Very good. How about you, James? You mentioned Meta previously, and you've been in the industry for a while. I remember looking at your resume; you actually worked at Netflix, sorry, Netscape, I mean, back in the early dot-com era. So you've had quite a journey and seen a lot as far as the tech industry goes.
Talk a little bit about that journey, from Netscape all the way to the cutting edge of AI agents in 2026.
James Everingham (31:19.266)
Yeah, sure. Well, I was at Borland before that, which was a big tech company, so Netscape was sadly already pretty far into my career. I learned with punch cards, let me just throw that out there, in the late 70s. And the Netflix founders did come out of Borland, so I did work with one of them. So you weren't completely incorrect there. But Netscape was very exciting.
Jason Hiner (31:31.29)
Wow.
Jason Hiner (31:40.328)
That's funny.
James Everingham (31:47.726)
My journey was that I was a developer tools person. I started my first company when I was about 20, selling developer tools. I was in Pennsylvania, I'm from there, and Borland, which was the premier languages company, Turbo Pascal, Borland C++, things people have heard of, had contacted me because they'd gotten my tools, and they recruited me to come out and work there.
Jason Hiner (31:56.113)
Hmm.
Jason Hiner (32:06.598)
Yeah? Yeah?
James Everingham (32:15.01)
It was an amazing environment, some of the best technologists. And perhaps one of the luckiest pieces of poor choice-making that led me to a great spot: at Borland, Microsoft came after us pretty hard. They launched Visual Studio, Visual C++. We were probably the biggest public threat to Microsoft at the time, and so they came after us. So I'm like,
all right, they're beating us up. They're giving this thing away and it's getting better and better, and I want to get out of their way. So I got recruited by a bunch of Borland friends into early, way early, very small Netscape. And I'm like, I'll go there, they're not an internet company, I'm going to get out of Microsoft's way. So yeah. And boy, if you're up on your browser history, I couldn't have gotten more into the
Jason Hiner (33:03.528)
That seems safe at the moment.
James Everingham (33:15.892)
into the target of Microsoft. They turned the whole company toward it, yeah, yeah. So a lot of Borland people went over there. I was there for almost the entire run of the company, from the first browser on. I managed the browser teams. We open-sourced Mozilla, and that ties in with my belief in open sourcing and putting visibility on things, because I've been there since the early days of that. And even my
Jason Hiner (33:17.852)
They went even harder at that than they went at Borland for sure. Yeah.
Jason Hiner (33:32.742)
Wow.
Jason Hiner (33:38.504)
Okay?
James Everingham (33:44.758)
My first company was open source tools, so I'm a long-term open source believer. The patterns look the same, though. I was naive at Netscape; I thought we would win by just building a better browser that people would use. Microsoft said, no, actually it's a distribution war, watch this. They integrated the browser into the OS, and we were done. So it kind of makes you wonder what the war is today, right?
Jason Hiner (33:49.788)
Wow. Yeah, yeah.
Jason Hiner (34:05.872)
Yeah, yeah.
Jason Hiner (34:10.439)
Yeah.
James Everingham (34:14.53)
What is it? Is it a technology war? Is it a data war? Is it a CapEx war? I don't even know.
Jason Hiner (34:21.768)
Hmm, yeah. I would love your thoughts on that a little bit, talking about open source and AI. Because now there's this incredibly interesting drama playing out around models. The open source models were eight months behind; now they're maybe five or six months behind. And now with Claude Code's
source code getting leaked, somebody immediately turned it into Python, which they could then open source, and now it's out there for everybody to take a look at, and the open source model companies are clearly going to learn from that. You know, one of the things we've talked a little bit about on The Deep View is that, in capitalist societies,
power naturally wants to centralize in a few hands, with a few winners. We see that with some of the large companies: Google, Anthropic, OpenAI are clearly emerging as the leaders and, sort of, winners. But for a healthy ecosystem to exist, you also need some of the bottom-up innovation, which is what you're talking about. How are you feeling about the strength of the bottom-up part of the AI ecosystem? That's a long question. Thanks for your patience.
James Everingham (35:46.764)
Yeah, no worries. I'll do my best to navigate that one. I would say a few things here, and please, of course, correct me if I'm not answering your question. But first off, in my opinion, I'm a pattern matcher. I go back, and I've seen the same patterns happen before, even back to the browser wars. It wasn't a technology war.
Jason Hiner (35:51.547)
Okay.
James Everingham (36:14.51)
The one thing I've also seen is that society's core functionality ends up becoming a commodity. Browsers ended up being a commodity, right? And when things become a commodity, you end up using whatever the default is. I'm going to bet that most people on iOS use Safari and most people on Android use Chrome. That's what the default browser is, so that's what they're going to use. There isn't a big incentive to switch from one to the other.
These models are improving, but they're getting closer and closer to each other. And from my pattern matching, they look like they're on a trajectory to become a commodity, and there won't be huge differentiation between them. Now, does that mean there aren't great businesses there? No, I think there are. And I think Microsoft was a great example of this. They tried to kill Netscape because they saw it as an existential threat.
It was the same thing: whoever won the browser wars was going to win everything. And it didn't turn out to be the case. Back then, Microsoft was a shrink-wrap software company; you would buy a box of software off a shelf and put a CD in, and they thought, this is going to destroy us if our business goes away. It turns out their business got massive. The web created a lot more users, they built services on top of it, and it made them a higher-margin, more profitable, much larger company.
Jason Hiner (37:16.424)
Mm.
James Everingham (37:39.98)
I think with these same model companies, we're going to see a similar pattern. Okay, it's a commodity, but what are the services we're building on top of it that differentiate us? AWS did this, right? It started out with just EC2 and S3, just storage and CPU, and now there are a million services on top of it. So I think that's what we're going to see: less focus over time on
raw model capability and more on what you can do with it through services.
Jason Hiner (38:15.954)
Very good. I love that insight, James, and appreciate it. Having seen the pattern of these things over time, it's really helpful to think about that.
James Everingham (38:26.07)
I remember seeing accountants freaking out in the late 70s when the first spreadsheets came out, saying, this is the end of us. Accountants are dead. Are you kidding? Now there are like 10 times more accountants, and they're all using spreadsheets. I think we're going to see the same thing here.
Jason Hiner (38:30.696)
It's gonna kill us.
Jason Hiner (38:37.586)
For sure. Love that insight. Okay, so I'd love to talk to you a little bit about leadership in this era that we're living in. When I talk to leaders, they talk a lot about leverage, which is all about: how do I use my time for maximum impact? So I would love to get your best tips for leaders who are
working and operating in this new era, where there are a lot of capabilities at your fingertips. How do you spend your day? How do you spend your time? And what are your recommendations for other leaders on how they can best maximize their time for the highest leverage in this new era?
James Everingham (39:24.526)
Sure. Well, first off, we're all experimenting and we're all inventing right now, so you want to encourage people to try new things. I mentioned earlier that your senior developers especially are pretty skeptical of AI and whether it can do anything. So mandating tool usage, I think, is a bit misguided. As I mentioned, tool usage is earned.
Jason Hiner (39:30.502)
Hmm.
James Everingham (39:53.218)
But how do you get there? What I think about is, how do I encourage them to even think about it? What I've found is that instead of saying, as a leader, everyone must use AI, you put big, crazy challenges out there that force them to look at different ways of doing things. Go back to the Meta example: how can we completely eliminate code freeze? How can we make a self-healing fabric? Hey, our
software development life cycle from landing code to releasing it is 50 hours; how do you get it to two? How do you expand things so that not just developers but our designers can actually start contributing code? Our account managers? How about our users? Put crazy challenges out there that get them to think differently. So there are the creative challenges, and then there's something I like to call order-of-magnitude thought experiments.
To give you an example of that: let's say you're a sales team, and I say, look, I need you to figure out how to increase revenue 10% this year. That's evolutionary; they're going to do the same things, just harder. But if you say, we have to 10x our revenue this year or we're done, give me your craziest ideas, they're going to come up with really different things. So will your teams inside your companies, in every function.
So: order-of-magnitude thought experiments, big business challenges, getting them experimenting, and celebrating your wins. That last one is something our system will be providing a surface to help with: when there's impact from using these tools, turn the people who made the impact into advocates for them. This happened naturally at Meta.
Engineers were proud of the agents, so they would make videos showing what they did, and everyone would get inspired. Pretty soon you started to see the skepticism fade. So those are a few things that I think are really high-leverage tools for leaders.
Jason Hiner (42:03.836)
That's great. You know, it's funny, the goals one reminds me of this book, Good to Great, that talked about big, hairy, audacious goals. Reminds me of that. BHAG, exactly, BHAG. Same, same. I'm not sure where I pulled that one out of, but I'm glad it came back, because that is what you're talking about: when you have to do something that's, like you said, an order of magnitude
James Everingham (42:10.062)
Yeah. BHAG. I haven't heard that term in a long time.
Jason Hiner (42:29.042)
higher than what you're working on, that forces you to really get creative, to think differently, to think more audaciously. So it's amazing.
James Everingham (42:39.98)
Yeah, you know, it's such an amazing time for teams and leaders. One of our CTOs told me something years ago that's really stuck in my head: hey, we have the best jobs, we get to be science fiction authors. Go write a science fiction story of how the future is going to be, and then go actually try to make it happen. Encourage your teams to do that, to be science fiction authors in your company, and reward the craziness. So I think
Jason Hiner (42:55.4)
Hmm
James Everingham (43:09.42)
I think it's a magical time for that.
Jason Hiner (43:11.986)
Beautiful. All right, last question, James: what's the AI tool that you're using right now that's maybe blowing your mind a little bit, or that's getting you really excited, that you would love to share with people and let them know is worth trying?
James Everingham (43:26.488)
Well, I think Claude Code is just amazing; Claude is just awesome, by the way. And I think they're all going in that direction with the reasoning models and the multi-step processes, so that's probably one of the top ones. I also use GPT Pro, the research edition, a lot, and I think it's really an amazing research tool. Now, I wouldn't say it saves me time; it makes me more productive, but it doesn't save me time.
And I've also learned something I think other people should be conscious of. We have a principle now: if you can't explain it, you can't ship it. If you build something using AI, you have to have an expert validate it, because it will confidently come back with information that may not be right. And while we can't put guardrails completely on thought,
the closest thing we can do is find experts to validate it. But I use GPT Pro a lot. The way I use the tools has evolved a lot. I don't use them to write for me completely; I will write something, then run it through and have it analyze the flow and suggest changes.
And I've evolved my prompts quite a bit. I would say: experiment with really long prompts, the craziest long ones. Be very detailed about what you want done. The more detailed the prompt, the better the result. So more than new tools blowing my mind, I'm still learning the capabilities of the ones that are already there, and I'm using them differently.
Jason Hiner (44:56.594)
Okay.
Jason Hiner (45:15.368)
Amazing. The long prompt thing, too, is one of the things some people have convinced me of recently: tell it what not to do. Tell it what to do, and then in those long prompts, also tell it, don't do this, this, or that, I don't want these things. That can help give it some guardrails, too.
James Everingham (45:33.694)
Yes, exactly. Otherwise, like I said, they're goal-oriented. They'll do whatever they can to achieve what they think your goal is, in any way possible. So I think it's important, even in coding, to say what you don't want.
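The long, detailed prompt with explicit negative constraints that James and Jason describe might look something like this in practice. The wording is a hypothetical sketch for illustration, not an actual Guild or Meta prompt:

```python
# Hypothetical illustration of a detailed prompt that states both what to
# do and what not to do, along the lines James and Jason describe.

prompt = """
You are reviewing a draft blog post about AI agents.

Do:
- Analyze the flow of the argument, paragraph by paragraph.
- Suggest specific wording changes, quoting the original sentence each time.
- Keep the author's voice and structure.

Don't:
- Rewrite the whole post from scratch.
- Add new claims or statistics that aren't in the draft.
- Change the thesis or the conclusion.
"""

# A goal-oriented model will do whatever it can to satisfy the goal it
# infers, so the Don't section narrows the space of acceptable outputs.
print(prompt)
```

The design choice here is simply that every "don't" removes a failure mode the model would otherwise consider fair game for reaching its goal.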
Jason Hiner (45:52.498)
James, thank you so much for being on The Deep View: Conversations. It was a pleasure to talk with you. Really appreciate your insights, and yeah, thanks for doing this.
James Everingham (46:02.264)
Thanks for having me on. It was a lot of fun. Appreciate it.