The Deep View: Conversations

What happens when AI agents outnumber humans in the enterprise? 

In this episode of The Deep View Conversations, Senior Reporter Sabrina Ortiz sits down with Jeetu Patel, President and Chief Product Officer at Cisco, to explore how the rise of AI agents is reshaping cybersecurity, software development, and enterprise strategy. 

Jeetu makes a bold prediction: today’s 150 million developers could expand to 3 billion agent builders within the next year. But that explosion comes with serious risk. As nation-states and bad actors deploy autonomous agents at scale, traditional, human-centered security models begin to break down. 

This conversation unpacks:
+ Why AI agents are becoming the new attack surface
+ What enterprises must do now to prepare for agent-driven threats
+ Jeetu’s journey from Box to Cisco, and what it taught him about leading through platform shifts
+ Practical advice for learning AI and building in an agent-first world 

But this isn't a doom-and-gloom conversation. Jeetu lays out a vision for how security can become an accelerator rather than a limiter, and why the distinction between giving agents access and giving them trusted, governed access will define which enterprises thrive in the agentic era. 

If AI agents are the next platform shift, cybersecurity may be the defining battleground. 

Watch the full conversation and subscribe for more interviews with the leaders shaping the future of AI. 

And don't forget to sign up for The Deep View daily newsletter. We don’t just cover AI, we decode it. In a world flooded with hype, we deliver sharp, no-nonsense insights to keep you ahead of the curve and help you put AI to work every day: subscribe.thedeepview.com 

Creators and Guests

Host
Sabrina Ortiz
Senior Reporter at The Deep View

What is The Deep View: Conversations?

From frontier labs and enterprise platforms to emerging startups reshaping entire industries, The Deep View: Conversations podcast interviews the brightest minds and the most influential leaders in AI.

Jason Hiner (00:01.582)
In this episode, senior reporter Sabrina Ortiz talks to Jeetu Patel, Cisco's president and chief product officer. Cisco's been transforming itself for the AI era over the past several years, and now it's gearing up to play a key role in cybersecurity as AI agents are running rampant in the enterprise. Jeetu emphasized that today's AI coding tools are going to multiply 150 million developers on GitHub to 3 billion agent builders over the next 6 to 12 months.

However, there's a huge question mark around what the coming wave of AI agents will mean for cybersecurity. Jeetu warns that nation states and bad actors could deploy billions of agents against critical infrastructure, leaving companies that don't have an agent plan defenseless, as the old human scale security model quickly becomes obsolete. He talks about how security can become an accelerator rather than a limiter, and how enterprises can distinguish between granting agents access

and granting them trusted, governed access. He also talks about his journey in the tech industry at Box and Cisco, and he shares his best advice for learning AI and leading teams in the new world being created by AI. So here it is, our conversation with Jeetu Patel of Cisco.

Sabrina Ortiz (00:01.14)
So Jeetu, you've been in the space way before AI blew up in popularity. How about you tell our audience a bit about yourself, how you got to this role, and what you do in the AI space?

Jeetu Patel (00:15.32)
So I run the product organization at Cisco. I've been here for about six years. Prior to here, I was at Box, Chief Product Officer there. And then before that, I was at EMC, where I built one of their first SaaS products. And prior to that, I ran my own kind of research think tank in Chicago. My first AI product I built was at Box, about eight years ago, and it was called Box Skills. Essentially, you know, the whole thesis was that at some point in time, machines are going to get smart enough to actually work on our behalf. But in all honesty, until transformers and language models came out, the potential had not been unlocked nearly to the degree that it has been since.

My first acquisition at Cisco when I joined six years ago was a company called BabbleLabs, which was a sound and acoustics engineering based product largely focused on taking noise out of a Webex meeting. So if you have a lawn mower in the background, or you're in a contact center and there's a lot of noise from other people, how do you amplify your voice and suppress the others, any kind of background noise, you know, blenders. During COVID, when people worked from home, there was a lot of that, and you had to come across as professional on the far end. That was the first use case that we tried to tackle at Cisco. And since then, machine learning was really an area that we had invested in heavily, but the real unlock happened with the ChatGPT moment, like with everyone else.

I still remember, we pivoted the entire company to saying, let's make sure that we go pretty hard on AI at that point in time. Now, we were investing in AI quite diligently. We had spent a couple billion dollars on AI even prior to this, but this was one of those seminal moments where it became a company defining moment. We were going to provide AI infrastructure. We were going to provide security for AI. We were going to

Jeetu Patel (02:39.342)
provide observability for AI. And every one of our products was going to be built using AI. Every one of our products would be marketed using AI. And that became a company-wide initiative. And I think it's been fantastic so far. Now, I do feel like we're in the second major phase of AI. The first phase was chatbots that intelligently answered our questions.

I tell people my personal learning would have been nowhere near as effective when I took over the job of running all products at Cisco if AI weren't there. There's zero chance I'd be able to get my job done at the pace that I have if it weren't for AI. But as we go to the second phase of AI, Sabrina, it's agents that are going to conduct tasks and jobs for us almost fully autonomously. And these agents aren't tools that are kind of

Sabrina Ortiz (03:22.612)
cute?

Jeetu Patel (03:36.27)
enhancing our productivity. These agents will be like digital coworkers that get augmented onto our teams, that you can delegate entire jobs and tasks to, to say, go take care of this and come back to me when you're done. Any decision that I feel comfortable over time having you make, just go ahead and make those decisions, and come back to me with the finished product. And there might be a few things that I might oversee. And by the way, you as an agent, if you feel like there's something you need my input on, please come back to me.

That entire mental model completely unlocks productivity, where you'll be able to solve problems you were never able to solve before. You'll be able to tackle an amount of throughput that you were never able to tackle before. So I do feel like it's a very special time in the industry, but it is also a time where these agents could go rogue. Agents could get manipulated by the external world of attackers.

And before you know it, unintended consequences could occur. So we have to be very, very thoughtful about how we go about ensuring safety and security of these agents as well.

Sabrina Ortiz (04:40.724)
Yeah, I would love to double click on that pivot that you mentioned Cisco made. Because as I see it, Cisco right now has a really pivotal role in making sure that people and enterprises understand how to employ AI use cases, applications, and tools in a way that is both safe and secure for them. But how did Cisco find that niche but very necessary role in the space, or in the whole AI ecosystem?

Jeetu Patel (05:11.63)
I mean, look, we are an infrastructure company. What happened was, if you take a step back and ask, what became the impediments for AI, that's the way to think about this. Very quickly after ChatGPT was launched, given the proliferation in the number of people around the world that started using it, there was no doubt in people's minds that this transformer architecture, and language models, and the ability to converse with a machine in human language and get a response back in natural form, had a lot of power. But what we had to really think about is, what role did we want to play in this? And the reality is, you have to look at the constraints. The first constraint

is infrastructure. We just don't have enough compute and power. And when I say power, I mean electricity, just sheer power to fuel the compute, and network bandwidth to satiate the needs of AI. I think we have grossly underestimated this. It's estimated right now that globally there might be something like a $5 trillion build-out of data centers throughout the world. I think if you look at a seven to ten year window,

we are grossly underestimating the amount of build-out that's going to be required to just fulfill the needs of AI. So that's the first constraint: infrastructure. The second big constraint is trust. If you don't trust these systems, you're not going to use them. And so, you know, safety and security, and making sure that these non-deterministic systems work in a way that's not going to surprise us in ways we don't want to be surprised, is super important. And so this notion of delegation of work to agents versus trusted delegation of work to agents: that difference between just simple delegation and trusted delegation is equivalent to the difference between going bankrupt and being a market leader. It'll be that stark, right? And so that's the second big area, this notion of trust. There's a trust deficit, and we have to bridge that trust deficit. And then the third area is a context gap.

Jeetu Patel (07:33.56)
And what I mean by a context gap is, when you and I interact in the world on a daily basis, the older we get and the wiser we get, what we're getting is a lot of context from the world: sight and hearing and language and sensory touch and smell. And all of that context allows us to operate in a very efficient way as humans. Agents need that same enrichment of context.

And if you don't give them the right context, that doesn't mean that they're not going to make decisions. They'll just make decisions in an uninformed way. So we have to create the apparatus for enrichment of context for agents, so that we can make sure the agents have all of the necessary context needed to make a decision in the best way possible, in the interest of humans. And so those are the three major constraints. And Cisco decided early on, hey, we need to participate in all three of those.

Because it turns out we've got underlying technology for all of them. And so what we've done, essentially, is build out great high-performance, low-latency, power-efficient networking within the data center, where the GPUs get connected, and also across data centers hundreds of kilometers apart. We've built out a security and safety apparatus.

And then we've built out this kind of data platform where you can enrich context for the agents, especially with machine data. And if we can do that, we would actually unlock some of those constraints. And so how I ask customers to think about Cisco is: think of us as critical infrastructure for the AI era. We are like the picks and shovels company during the gold rush, where we provide the necessary tooling so that people can actually get the most from their AI efforts.

Sabrina Ortiz (09:28.426)
I like that analogy a lot because it really does help, I guess, paint a picture of what Cisco is currently doing. I think another good example of that is the LLM Security Leaderboard that you just released, because it really tackles that trust issue again. And it's less of a tool in the way people might imagine, like something in the back end that might not be as tangible. This is something everybody could easily access, take a look at, and understand what their favorite models are and how they're performing. Is that something you also consider, just kind of making it more accessible for everybody to understand these bigger problems?

Jeetu Patel (10:09.816)
Yeah, I think it's a very important thing to keep in mind, which is, these agents are moving at a very fast pace. They tend to be non-deterministic in behavior, and the models that the agents are built on top of tend to be non-deterministic, which means that if you ask the same question twice, you might get a different answer. And when you're building predictable systems for large enterprises on a non-deterministic platform, you have to make sure that you're actually validating that model, that you have visibility into what's actually enriching that model from a training standpoint, and that you're providing runtime enforcement guardrails when that validation doesn't seem like it's going the way you want it to go. And that has to be done at the model level, and it also has to be done for the agent, right? And what we've done with this LLM leaderboard tool is

just be able to say: these are all the models, and when we ran our algorithmic red teaming exercise on these models, here's how they performed. What was the attack success rate? How many times were we able to jailbreak the model? What were we able to do with that? Could we then put runtime dynamic guardrails in place for enforcement so that you didn't have the model just go rogue? Because what you have to do is protect these agents from a world that will try to manipulate the model by influencing it, by feeding it certain data, just like you would a two-year-old. If a two-year-old was put in front of two entirely different kinds of people, one who was a really kind, nice human being and another who was not that way, you would see a very different influence on that two-year-old. You need to do the same thing with agents. These agents are like teenagers:

they don't have a sense of right from wrong. You have to make sure that you actually provide them that guidance. So you have to protect the agent from the world. You have to protect the world from agents when they go rogue. And then you have to make sure that you have some kind of mechanism that assumes that these agents are going to be used by bad actors and adversaries to do harm to us. And when we do that, we need to have machine scale defenses and we need to have detection, investigation and response and remediation of those breaches.

Jeetu Patel (12:34.53)
that might happen via an agent, to be responded to at machine scale. And so those are all important dynamics to keep in mind. And so what you were saying about the LLM leaderboard, the goal over there is, when we go and share with customers that you need to make sure that you secure AI itself, the first question customers ask us is, let me see how well it works. Well, here you go. Here's a leaderboard. It tells you what you have. And then we also provide a free explorer tool that says, go ahead and try it out and see what it tells you. And that gives you at least a flavor, rather than just a PowerPoint presentation.
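
As a rough illustration of the metric Jeetu mentions, attack success rate is simply the fraction of red-team attempts that got past a model's defenses. This is a minimal sketch under that assumption; the function name and the shape of the results are illustrative, not Cisco's actual red-teaming tooling:

```python
def attack_success_rate(outcomes):
    """Fraction of adversarial prompts that jailbroke the model.

    `outcomes` holds one bool per red-team attempt: True means the
    prompt bypassed the model's safety behavior.
    """
    if not outcomes:
        return 0.0
    return sum(outcomes) / len(outcomes)

# e.g. 50 adversarial prompts, 7 of which bypassed the guardrails
results = [True] * 7 + [False] * 43
print(attack_success_rate(results))  # 0.14
```

A leaderboard built on this idea would run the same adversarial prompt suite against every model and rank them by this rate, lower being better.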

Sabrina Ortiz (13:13.802)
Yeah, that's super useful too, because I think people also rely on the model cards or the system cards, or whatever the companies are putting out about their own models. And there's clearly an inherent bias there, right? Even if they're running the benchmarks, there are still some things that they might leave out that this leaderboard, or third-party evaluations in general, can do a better job at. How do you, I guess, explain that?

Jeetu Patel (13:22.328)
Yes.

Jeetu Patel (13:40.61)
I actually don't even think it's that. Like, Sabrina, I don't even think it is a bias by the model providers. What you have is, a model provider is going to be really good at protecting their model. But we live in a multi-model world. You're going to have many, many different models with varying degrees of emphasis on security, and you want to have the flexibility as a company to be able to use a multitude of these models, because of one thing that we've learned in the past three years: who you thought was going to be a leader and who became the leader the next week tend to be very different. Everyone's going to be leapfrogging each other every single week.

And so what you want is a common substrate for security that goes across all models, all agents, all applications, all clouds. And that requires a neutral party that can do that. And we want to do it for open source models and closed source models. We want to do it with provenance that says: which models are you using? Was this distilled from another model? If it was distilled from another model that happened to be from China, is that a concern for your organization? Does that violate any policy for you? If so, then having that provenance in the model is pretty important. And so there's a bunch of this apparatus that needs to get built out so that we just provide a level of comfort

and tooling for the CISO and the CIO and the developer when they use AI to build. Because we have 150 million GitHub developers in the world right now, but we're going to have probably 3 billion builders that build agents in the next 6 to 12 months. How do we make sure that those 3 billion builders are building safely and securely? That's what we provide in the underlying infrastructure.

Sabrina Ortiz (15:39.504)
You just put out a piece about reimagining the AI, or the agentic, workforce. And I thought it was very interesting, because if you really think about it, as you mentioned before, it's really like a team of people, except that this time it's agents that act autonomously but don't have the same thinking or processing or reasoning that humans have, right? So their actions might be a bit less predictable, or might need more guardrails put around them. What are some of the biggest things that you're seeing, and how immediately should people be reimagining how they're using these tools and how they interact with AI?

Jeetu Patel (16:19.534)
So firstly, it's important to understand what constitutes an agent. When we say we're in the world of agents, what is an agent? An agent is a piece of software that behaves like a digital coworker: it can essentially act autonomously, think, reason, and conduct tasks and jobs in their entirety if need be,

and, most importantly, it is going to have access to your tools and systems. If you don't give an agent access to your tools and systems, it's not really quite an agent. You have to give it access to something so that it can act on your behalf. And so if you say that's the definition of an agent, it's extremely important for us to then say, if agents are going to do those kinds of things, we need to make sure that we

think about the maturity level of these agents. Right now, where we are in the history of the evolution of agents, they are truly like teenagers. I have a 15-year-old daughter, and, you know, she is supremely intelligent. She believes she has actually figured out the world, right? Which is great in some ways, but she has no fear of consequence. And her understanding of right and wrong is something that we, as parents, have had to make sure we instill in her: when she is doing something right, you reinforce it; when she's doing something wrong, you reinforce why it's wrong.

And that is kind of how agents are. They take things very literally, and they can conduct tasks and actions without a fear of consequence. And sometimes you can't undo what they did, because if you give them access to your bank account, they could charge $40,000 to the account and spend it, and you have no way to go out and revert that. And so what you have to do is construct a set of guardrails to say: how is this agent behaving? How can we see when there's drift in the behavior of that agent?

Jeetu Patel (18:41.28)
And before that agent starts doing something harmful, can you intercept it? That's what needs to happen with these agents. And that's kind of, in my mind, the tricky part, which is, over the course of the next few years, billions and trillions of these agents will be around. For every human, you might have 100 to 1,000 agents. These agents are working around the clock, 24 hours a day, 7 days a week, and humans will basically have a job to keep these agents working on our behalf all the time. They don't need to go home, they don't need to spend time with their family, they don't get tired, they don't get sick, they're just working. And so the more we keep them working, the better off we're going to be. But in order to keep them working, we have to make sure that they work within a set of confines and guardrails.
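
A minimal sketch of what one such confine could look like in code: a hard spending cap an agent must clear before any purchase, with anything over the cap blocked for human review. The class and method names are illustrative assumptions, not a real agent-framework or Cisco API:

```python
class SpendingGuardrail:
    """Hypothetical guardrail: every purchase an agent attempts is
    checked against a hard budget; a charge that would exceed it is
    blocked instead of executed, so it can be escalated to a human."""

    def __init__(self, budget):
        self.budget = budget   # total the agent may spend
        self.spent = 0.0       # running total of approved charges

    def authorize(self, amount):
        """Return True if the charge fits within the budget;
        False means the action is blocked pending human approval."""
        if self.spent + amount > self.budget:
            return False
        self.spent += amount
        return True

rail = SpendingGuardrail(budget=5000)
print(rail.authorize(1200))    # a reasonable hotel charge goes through
print(rail.authorize(40000))   # a $40,000 surprise is blocked
```

The key design choice is that the check happens at action time, after the agent has already been granted payment access, which is the distinction Jeetu draws between delegation and trusted delegation.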

Sabrina Ortiz (19:25.652)
We just acknowledged the inherent risk of having agents, right? They're accessing what you would access so that they can do the work for you. What do you advise customers or enterprise leaders who, for that reason, don't want to give agents access to their data, but then at the same time aren't seeing results or any ROI on their AI investments?

Jeetu Patel (19:47.406)
So we've talked about a lot of the risks. Let's talk about the dynamic of what is happening right now, because I think there are only gonna be two kinds of companies in the world: companies that are very dexterous with the use of AI, and companies that are gonna really struggle for relevance. And let me tell you why that's the case. If you think about the kinds of people you have within your organization, between the people that are fluent in AI and the people that have no fluency in AI, the differential in productivity is not going to be 20%. It's going to be like 50x, 100x, you know, a couple orders of magnitude per person. Imagine if you, Sabrina, could do the things that you're doing right now, but a hundred times more in that same window of time, because you have fluency with AI. Or 50 times more. It becomes very hard to justify

hiring someone who's not as good as you with AI. So I think there's going to be a tremendous shortage of people who are fluent with AI and a tremendous surplus of people who are not. And our job in society is to make sure that we can take that group of people that are not fluent with AI and retrain them, get them upskilled and reskilled for an AI-based, post-agent world. And so that, I think, is a pretty important dynamic to keep in mind as we go through this. The upside is we can not just cure cancer, we can cure any disease. We can not just make a growing middle class, we can eradicate poverty.

We don't just get people to a basic level of skills dexterity; we can actually have everyone become as smart as a PhD student, because education gets evenly disseminated everywhere. That has a profound impact on society. But in order for that impact to be realized, you have to make sure that you've got the guardrails in place. And if you don't do that, if you decide to say, you know what, I'm just gonna wait, I'm not gonna give it access, I'm gonna sit on the sidelines for a while,

Jeetu Patel (22:04.844)
you will miss out on all the opportunity that everyone else is going to capitalize on, and you'll actually be left behind. And the difference between being first and being middle of the road, speed-wise, is going to be the difference between a winner and a loser. Operating with extreme speed for long durations of time, where you typically try to be early to market as a matter of habit, is going to be one of the biggest strategic cultural moats that you can think of. And so as you think about your organization, on one end, you have to encourage them, almost require them, to move really fast. And on the other end, that speed cannot come at the expense of safety. You have to have both. It cannot come at the expense of security, and it cannot come at the expense of governance and trust and privacy. And so, you know,

that has to be thought through as well. And I think it's a pretty important dimension to keep both of those pieces in mind.

Sabrina Ortiz (23:08.296)
This reminds me of something that you said in your last blog post that stood out to me: security as the accelerator. I'd love to hear more about your thoughts behind that. I think that's a pretty fascinating concept, especially because in Silicon Valley there's a lot of the move fast, break things philosophy, and security as the accelerator is almost like the exact opposite. So I'd love to hear more.

Jeetu Patel (23:30.938)
In the past 30 years or so, security has been looked at as a limiter, kind of like a governor that needs to be implemented: do you want to be productive, or do you want to be secure? And that's always been the balancing act that we've had to strike. This is the first time where safety and security are almost a prerequisite for adoption,

And a prerequisite for the acceleration of adoption of AI. Because if you don't trust a system, are you going to give it your bank account number? No. If you don't trust a system, are you going to give it something personal that you want to have done? No, you're not going to do that. But if you trust the system and if you feel like there's enough guardrails, then you'll go ahead and do that. Just like we do for so many things where we are okay giving our personal information when we open up a bank account.

Why? Because we trust that institution. Imagine if you didn't trust that institution: would you give it all your money? I mean, the foundational elements of society operating are based on trust. I trust a bank enough to tell my company to pay the bank instead of paying me directly, and I will have access to my bank account. And that's how I've actually been living my life: I have an institution in the middle that becomes that intermediary custodian,

and they charge a fee for that, and that fee is in the form of float and all the things that a bank would do. But that established trust with that entity, by the people, was super important. AI will be the same way. Trust is going to be a prerequisite that sits between AI and me. If I don't have trust, I'm not going to use AI. If I have trust, I am. And I feel like that is a

fundamentally different way than the cybersecurity market has been operating for the past 30 years. And I think it's a prerequisite for success. It's a prerequisite for adoption.

Sabrina Ortiz (25:41.62)
A question, though, because it seems almost like a self-fulfilling prophecy. Somebody will trust a model, which is necessary for them to do well, but then, you know, a case will go viral, for example the OpenClaw one, where a person's inbox got entirely wiped or trashed. Yes, exactly. And so then there's a lack of trust, and people pull back, and people aren't using it to its full potential. And then somebody will trust it again, and it will fail again, and then

Jeetu Patel (25:57.806)
wiped out,

Sabrina Ortiz (26:09.82)
Again, it's like a self-fulfilling prophecy. Yeah, what do you advise there?

Jeetu Patel (26:14.936)
I think that is the central reason why companies like Cisco are in business. Just because you've given the agent the right to delete an email does not mean that it should be deleting 10,000 emails. And if it does start to delete 10,000 emails, you know that something is wrong. And before it actually starts to drift into that behavior, your system has to be smart enough to know that this is anomalous and say, I'm going to intercept it.

And the more that we mature on the security front and the safety front and the trust front, the more those kinds of things will not happen, because it's unacceptable. Like, today, what happens when an agent does something wrong? Let's say that the agent debited $40,000 from your bank account. I gave this example in my RSA keynote recently. You ask the agent, hey, go plan an offsite for me in Napa, handle everything.

The agent gets to work, you know, it starts going out and looking at hotels and bookings and what kind of availability there is, and the venues, and what kind of food people need to have, and all of that stuff. And once it's gone out and done that research, it starts actually making decisions and saying: I'm going to book a hotel. I'm going to make sure that I get a car booked. I'm going to make sure that I have a plane ride booked for the person. So on and so forth.

As those things happen, it might decide to say, okay, I'm going to go pay $40,000 for this one thing that I wanted to do. And that was not something that you had given it explicit permission for. And so you then realize, oh my goodness, something went wrong. Hey, WTF, why did something go wrong? Right? And what does the agent do? So sorry. Noted. I'll keep it in mind for next time. Well, but you're still out $40,000.

These are irreversible actions. And so what you have to do is make sure that there are systems in place that, post-login, post-granting of permission, are still gauging whether or not there's rationality in the behavior. And if that behavior is not proving to be within the bounds of what's acceptable, then you know to intercept it. And you also have to know when it is an agent working on your behalf,

Jeetu Patel (28:42.818)
versus when it is you working by yourself. Because both you and the agent are going to use your computer, let's say, and if the agent's doing computer use, it's going to pretend to be you and go out and use the browser. Well, we need to make sure that the identity of agents is distinct from the identity of people, but that there's an associated person with every agent that the agent is accountable to. And those are all aspects that need to be thought through.
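
The interception Jeetu describes, noticing that "delete an email" has drifted into "delete 10,000 emails" and stopping it mid-stream, could be approximated with a simple sliding-window rate monitor. The class name and thresholds here are illustrative assumptions, a sketch of the idea rather than a real product interface:

```python
import time
from collections import deque

class ActionRateMonitor:
    """Hypothetical runtime guardrail: allow an action only while the
    agent stays under a rate ceiling; a sudden burst of actions
    (behavior drift) gets intercepted instead of completing."""

    def __init__(self, max_actions, window_seconds):
        self.max_actions = max_actions
        self.window = window_seconds
        self.events = deque()  # timestamps of recently allowed actions

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # drop events that have fallen out of the sliding window
        while self.events and now - self.events[0] > self.window:
            self.events.popleft()
        if len(self.events) >= self.max_actions:
            return False  # anomalous burst: block and escalate to a human
        self.events.append(now)
        return True

# an agent allowed 3 deletions per minute: the 4th is intercepted
monitor = ActionRateMonitor(max_actions=3, window_seconds=60)
print([monitor.allow(now=t) for t in (0, 1, 2, 3)])  # [True, True, True, False]
```

A production system would baseline each agent's normal behavior rather than hard-code a ceiling, but the principle is the same: the check runs after permission has been granted, at the moment of action.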

Sabrina Ortiz (29:10.91)
I'm curious what you think about all of these solutions currently being offered to enterprises to detect these kinds of actions that also use AI, essentially AI for good catching AI for bad. There are all sorts of them, whether it's for actual bad actors or, again, an agent going rogue. How can we trust the good AI to actually get the necessary action done the right way, especially when, technically, you're employing it because we don't necessarily trust that AI will always get everything right?

Jeetu Patel (29:45.806)
See, the question is not how can you trust AI. We have to make sure that we figure out mechanisms to trust AI. The more germane question to ponder is if I don't use AI, is there any chance that I'm going to be able to combat an adversarial attack on my turf that could actually cause a lot of harm? Like imagine that nation states

might employ agents and say: go take out critical infrastructure in a certain company or in a country. Attack all power grids, attack all hospitals, coordinate that attack, and I'm going to go deploy a billion agents towards it. If I don't have a mechanism to go out and combat that attack at machine scale, then I'm just going to be brought to my knees. You know, it becomes a very hard thing

to go out and solve for, unless you have a commensurate defense posture at machine scale, just like the attack posture that is inevitable, that's going to happen at machine scale. Because, you know, there's no reason for bad actors to not use the same technologies that we're using. We all have access to them, and they're going to continue to use them. And when they do use them,

we have to make sure that we have a level of responsiveness to be able to just go out and adjust to that.

Sabrina Ortiz (31:17.834)
So it's like.

Jeetu Patel (31:18.058)
And that, in my mind, is one of the issues of the decade that has to be solved: how do we make sure that these agents stay safe, they stay secure, and that the attack surface that is now opened up quite materially can be defended at machine scale, not at human scale.

Sabrina Ortiz (31:46.086)
Right, so we're seeing the same cat-and-mouse chase that's been, like, as old as time for cybersecurity, except that now the speed of AI is making everything move so much more, yeah, exactly, exponentially quicker. So how do you, I guess, again, offering that infrastructure and offering the solutions, find ways every day to stay ahead and offer your clients the most cutting-edge

Jeetu Patel (31:59.776)
Instant. Instant. It's real. Yeah.

Sabrina Ortiz (32:14.174)
protection when everything, every single day is changing.

Jeetu Patel (32:17.922)
Yeah, so I think it's a very important point. Firstly, like I said earlier, zero chance that you can do that without a machine-scale response. So what does that look like? How would you change that? Let's take a very concrete example. There's a security operations center, what they call a SOC, that would typically have analysts looking at all the breaches and threats that are occurring and, you know, looking at all the

incidents that are happening and saying, what can I do from a threat investigation standpoint to investigate an incident that happened and ensure that I can actually tackle it, respond, and remediate that incident, right? And so what tends to happen is you have all this signal coming in, in the form of alerts. Now, what is it that, in the classical SOC we've had, people have had an issue with? They have alert fatigue.

Thousands of alerts coming in, thousands of things that could be wrong. And as a human, you might have, say, let's even say it's a generous company, they've got 20 SOC engineers, which is a lot. And these SOC engineers, they don't work like everyone else. They work, you know, 15 hours a day. That's a lot of hours that they're going out and tracking this stuff. And it's 24/7. But even despite that, most organizations can only patch 20% of the vulnerabilities that are exposed. 80% of the vulnerabilities that get

identified and announced and exposed in the public domain never get addressed. Why is that? Because there's just not enough resource. There are 4 million jobs that go unfilled every year in cybersecurity. There's just not enough qualified resource to be able to go out and patch every vulnerability that gets identified. Now here's the catch. While all of that was happening, imagine that

this attacker, the adversary, the threat actor, said, you know what, I just got funded. I'm going to go out and deploy a billion agents to go out and do what I was doing.

Jeetu Patel (34:21.806)
My goodness, now you have a billion agents doing that. And so what used to be, "I would ignore 80% of my alerts and only go out and address 20% of my vulnerabilities," now I'm going to be able to do even a smaller amount of that, because the volume just compounded materially, you know, by a billion times. And so as that volume compounds, what you have to do as a defender is say, the only way I can respond to this is at machine scale.

There's no other way to respond to it. And if I'm responding to it at machine scale, I need to make sure that, you know, I have a commensurate response to the scale of the attacks that I'm getting. And so instead of having a SOC, a security operations center, you will now have to have an agentic SOC.

Because in the absence of an agentic SOC, you're not going to be able to go out and respond to the threats. That's what I mean by machine scale. So I do feel like this shift that's happening fundamentally shifts the architecture of security. A non-agent-deployed system will not be an efficient and effective system moving forward against an agent-scale attack. That doesn't mean humans go away.

It means that humans by themselves are going to be woefully insufficient in quantity and intellectual horsepower to be able to go out and do the things they need to do. And so they have to employ agents and delegate work to agents to be able to do that. Now, how do you make sure those agents stay effective? You specialize those agents, just like SEAL Team 6; you specialize them on something that they're going to be doing extremely well.

You enrich them with data and context, and you provide them with a mechanism and a mandate that says: no alert shall be ignored, no signal is something you should ignore. You should actually research every single one of them. And we are now finally at the point where you can be in a world where you're not just statistically determining the probability of whether or not an attack is dangerous. You can actually look at all attacks.
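The "no alert shall be ignored" mandate of an agentic SOC can be sketched as a routing step in which every alert is assigned to a specialized agent rather than sampled or dropped. This is an illustrative toy, not a real SOC pipeline; the category names and agent names are assumptions.

```python
from collections import Counter

def route(alert: dict) -> str:
    """Route each alert to a specialized triage agent by category,
    echoing the SEAL Team 6 idea: each agent does one thing extremely well."""
    specialists = {"phishing": "phishing-agent", "malware": "malware-agent"}
    return specialists.get(alert["category"], "generalist-agent")

def triage(alerts: list[dict]) -> Counter:
    """Assign every alert to an agent; nothing is statistically sampled away."""
    assignments = Counter(route(a) for a in alerts)
    assert sum(assignments.values()) == len(alerts)  # invariant: no alert ignored
    return assignments

alerts = [
    {"id": 1, "category": "phishing"},
    {"id": 2, "category": "malware"},
    {"id": 3, "category": "lateral-movement"},
]
print(triage(alerts))
```

The contrast with the classical SOC is the invariant: instead of 20 human engineers addressing 20% of the queue, the assignment count always equals the alert count, and scale comes from adding agents, not hours.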

Sabrina Ortiz (36:43.124)
You made a really interesting point there, too, that it's not that we're replacing current people who have the knowledge of the business, knowledge of attacks, knowledge of how the organization internally works, and all the context that an AI would not be able to as easily have. Those people are just going to expand their capabilities with these agents. When we see headlines about the jobs that AI is replacing,

typically the ones that are mentioned are roles like coding or cybersecurity or computer science, or anything that's more on the technical side, because the logic is that you could just offload that. But using what you just said, it sounds like that would actually be a disservice to the organization, where the right mentality would be more: how can we augment our current workers' capabilities with AI?

Jeetu Patel (37:33.75)
Yeah, I actually think, because this is a very, very active conversation thread that goes on, which is, are people going to be not needed, or are we going to have bottlenecks in humans right now? I feel like there are going to be two classes of people. For those people that have domain expertise in a certain domain and they have AI fluency,

I literally think they'll be in 10 times more demand than they've ever been in their lives. You know, if someone is a security analyst and they have AI fluency, they're invaluable to a company. The people that don't have AI fluency and have only domain expertise are going to struggle, because they won't be able to operate at the speed that the people operating with AI fluency plus domain expertise can operate. So that delta is actually going to be the difference

between success and failure. And so what does that mean? That means that every person, in virtually every walk of life, needs to start thinking about gaining AI fluency, because that is the ticket to relevance in society. Up until now, relevance in society was established through procuring a skill

and having intellect about certain things. Intelligence is now commoditized. Everyone can have it. But just because it's commoditized does not mean it's evenly distributed. You still have to figure out a way to utilize that intelligence to your benefit. And that only happens when you actually start thinking about this from the standpoint of:

I'm going to learn these technologies. I'm going to make sure that I'm utilizing them on a daily basis. I'm going to depend on them for getting my job done. Not the other way around. You know, like I always say, there are levels of maturity. The first level of maturity is when people think, my goodness, AI is going to take my job. The second level is when they're like, hmm, AI might not take my job, but someone that uses AI better than me will take my job. And the third level is when they say, I just realized I can't get my job done without AI.

Jeetu Patel (39:56.278)
I'm going to have to use AI to get my job done. And then the fourth level of maturity, which is emerging, is: I am so good at using AI to get my job done, and I am so fast at it, that I'm better than anyone else in the world at doing that, that my throughput capacity dwarfs everyone else's. And those people are going to be the cream of the crop. And it doesn't matter in what field.

You can do that in podcasting, Sabrina. You can say, I'm going to use AI in the way I promote my podcast, the way I edit it, the way that I, you know, talk about it, the way I summarize it and glean insights from it. It's going to be completely different because I'm going to use AI. Yours will be a better podcast than someone who says, you know what, I'm not going to use any of those technologies.

Sabrina Ortiz (40:41.652)
Can you define AI fluency as you see it? Because I think every day the term AI literacy or AI fluency has a different meaning. Like, three years ago, it would have been knowing how to maybe enter a prompt into ChatGPT. Prompt engineers were all the rage back then. But since then, you know, all of us are kind of mini experts on AI, on a very, very small scale at least. So what does AI fluency, as you see it, mean today?

Jeetu Patel (41:09.026)
A few things. Firstly, the definition will continue to keep changing. But what are the constants? Constant number one: staying current with the developments in AI and what's happening in the market, understanding them deeply, spending the time to research them and stay on top of them, is a superpower that I don't think can be overstated. I spend about three hours a day, Sabrina,

just keeping up with what's going on in the market, typically between nine and midnight. The second thing is you have to be open to experimentation. And so whatever tools and techniques you might have heard of from someone, don't hesitate and don't be afraid to use them yourself, and just get a feel and an instinct for them, and get

to understand the texture of that system, and the texture and the nature of the way the problems are going to get solved. And if you can do that right, boy, there's a lot of upside for you as a person, because you've just created massive asymmetry for your skills compared to someone else's. And over time, the gap only widens. It doesn't compress. And so I find that

the people that are extremely curious about AI, not only are they good, their rate of growth is so fast that they're going to be better tomorrow, and better the day after, and better the day after that. So when they keep getting that much better, if you stay steady, by definition you're not staying steady; you're getting worse every day that you're not using it. So you have to experiment with it, and you have to stay current with it, and you have to do it for long durations of time.

Because the power of compounding doesn't happen from one day of learning this stuff. It happens by it becoming a habit. It's like going to the gym. You have to stay up to date. It just takes a little longer than a 35-minute workout.

Sabrina Ortiz (43:17.684)
Yeah, and I think you make a good point too. Even if you distrust AI, or you don't like the technology and don't really want to be involved, the best way, I guess, to stay ahead of it, or to make yourself immune to it, would be to understand it, correct?

Jeetu Patel (43:33.262)
Yes, absolutely. But it's also that these things are changing so fast that just because today it doesn't work that way, it doesn't mean that in two months it's not going to. So what I tell my team, because when we were starting to learn how to code with AI, I had a lot of resistance from the team: Jeetu, it doesn't really work that way. You need to make sure that you're realistic about the expectations you have. This is going to take years before it happens. You know, and then all of a sudden there was an unlock moment

that happened recently, right around February, when the models got better. And now all of a sudden, you know, we had our first product that was a hundred percent built with AI. We'll probably have half a dozen products built a hundred percent with AI by the end of the year. And we'll probably have 70% of our entire estate of code written only with AI, no human lines of code. You know, we will refactor, over the next couple of years, every single piece of legacy code, so you will not have legacy code. There's no such thing as a legacy company anymore.

And so that, in my mind, is so profound in its implications. What you have to think through on that front is: if that is the case, and if I am learning to learn, then I know that I can't look at something on a given day and say, this isn't working, therefore I'm going to shelve it for a year and come back to it.

You have to assume it's going to work within 60 days, and you have to plan such that in 60 days, when it starts working, you're the first one to take advantage of it. And that, I think, is the really important dynamic. It's a very counterintuitive way to operate, because most people like operating with some level of data and proof: show me that this thing does something before I do it. Over here, it's a leap of faith. You're saying, within a matter of a couple of months, I know, given

the data I have on how quickly the models are growing, given what I'm seeing the models do, given what I can see the next model might look like, I don't think I'm going to be able to just ignore this for 60 days. I'm going to need to make sure that I really get a feel for it, so that when it's ready, I'm one of the first to go out and take advantage of it. Early to market is everything.

Sabrina Ortiz (45:52.82)
That's really great advice. And as we wrap up along those lines, I want to ask you one last question. For people who are currently experimenting, they are tinkering, they're trying to implement it in their workflows or have already, what can they do to take it to the next level, to just supercharge their workflow or see more results than they already are?

Jeetu Patel (46:15.928)
Firstly, use multiple different tools, because every product is evolving differently. And so you'll get ideas from different tools and how they're thinking about it. Part of the whole point over here is to get a texture of what's happening in the industry. The second thing is, there are these things called Markdown files, which basically are, you know, preferences that you can give in plain text to the AI model. Learn to get good at Markdown files.

Learn to build those, learn to use tools like Claude Code. You know, you have 150 million GitHub developers today. I think you'll have 3 billion builders of AI agents in 6 to 12 months. 3 billion. That's almost a third of humanity that'll actually be building agents, you know. And if you can have that level of exponentiality and volume increase,
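The plain-text preference files Jeetu mentions look roughly like this. The convention Claude Code uses is a `CLAUDE.md` file in the project root; the specific preferences below are purely illustrative.

```markdown
# Project preferences (illustrative CLAUDE.md-style file)

## Style
- Prefer small, pure functions with descriptive names.
- Add a comment explaining any non-obvious design choice.

## Workflow
- Run the test suite before proposing a change.
- Never commit directly to `main`; open a pull request instead.
```

Because it is just Markdown, there is nothing to compile or configure: the model reads the file as context and treats the bullet points as standing instructions for every session in that project.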

what you have to have is the ability to experiment with this, get better at it, start doing it yourself, and get a feel and an instinct for it. And then when something works, just be diligent about incorporating it into your daily process of how you work. I think that's the most important thing, you know. And completely suspend

your previous notions. Your unlearning ability is going to be so important right now, because you have to suspend your previous experiences and notions. If someone says to you, well, I really like this document, I really like this concept and strategy that you talked about, can you just get me a document that has that? And can you just get me a PowerPoint presentation that we can use? That is not a two-week task anymore. That can be done overnight, you know.

And it's not just documents. Can you write me an application? Can you do something that needs to be done? Those timeframes are compressing unnaturally. The hardest thing in this is not that the timeframe is compressing. The hardest thing is humans getting rewired to adjust to that compression of timeframe, and then conducting their operations within society in that compressed timeframe

Jeetu Patel (48:34.315)
without actually creating a level of stress and anxiety that makes people snap. That's what you have to do. And that, I think, is the hardest thing. The change management aspect is non-trivial. And I think it does bring people anxiety, and that is okay; it's a normal reaction. Everyone feels slightly behind. Everyone feels like, my goodness, I need to catch up, from the moment you wake up. And that is true.

And that's because the market is moving at a frenetic pace. And what you have to think about is: I'm not going to be reactive and frenetic to the market movements. I'm going to be responsive, with a sense of urgency, to the market movements. If you can distinguish between those two and use AI as a tailwind, and not fight it, you will be superbly successful, and you don't have to worry about much. If you fight it and you're not

using it as a tailwind, I mean, it gets to be pretty difficult in about six to twelve months for people.

Sabrina Ortiz (49:37.214)
All right, on that pretty forward-looking note and good advice, I want to thank you for your time. This was such an excellent chat, Jeetu. I hope that all of our listeners learned a thing or two, because I know I did. So thank you.

Jeetu Patel (49:52.59)
It's always a pleasure to see you Sabrina. Thank you for having me.

Sabrina Ortiz (49:55.176)
Yes, of course. Before you leave, Jason will probably hit the stop recording button and