The Deep View: Conversations

In this episode of The Deep View Conversations, we talked with Wasim Khaled, CEO of Blackbird AI, to explore a provocative idea: What happens when reality itself becomes hackable?

Long before generative AI went mainstream, Wasim and his cofounder launched Blackbird to tackle disinformation and narrative manipulation. Their thesis was bold: that part of modern cybersecurity conflict had shifted from infrastructure to information, from networks to narratives.

It turned out to be prescient.

As AI supercharges the speed, scale, and realism of malicious content — from deepfakes to coordinated influence campaigns — Blackbird has emerged as the leader in combating narrative attacks. In fact, Gartner recently named Blackbird the company to beat in disinformation narrative intelligence in its report on the AI Vendor Race.

In our conversation, we explore:
+ What “narrative attacks” really are and why they’re so hard to detect
+ How AI has fundamentally changed the disinformation battlefield
+ Reactive vs. proactive defense strategies in cybersecurity
+ How Blackbird evolved from a lab experiment into a national security player
+ Why leaders relying on chatbots instead of AI agents are already falling behind

Wasim also shares how he optimizes his time for maximum leverage, and offers his advice for founders navigating fast-moving technology shifts.

If you care about cybersecurity, AI, information warfare, or the future of leadership in the age of intelligent agents, this is a conversation you'll want to hear.

Subscribe to The Deep View: Conversations podcast
in your favorite podcast player for more unique conversations with the brightest minds solving the biggest challenges in AI. You can also subscribe on YouTube.

And don't forget to sign up for The Deep View daily newsletter. We don’t just cover AI, we decode it. In a world flooded with hype, we deliver sharp, no-nonsense insights that keep our audience ahead of the curve and help them put AI to work every day: subscribe.thedeepview.com

Creators and Guests

Host
Jason Hiner
Editor-in-Chief of The Deep View

What is The Deep View: Conversations?

From frontier labs and enterprise platforms to emerging startups reshaping entire industries, The Deep View: Conversations podcast interviews the brightest minds and the most influential leaders in AI.

Jason Hiner (00:05.902)
In this episode, I talked to Wasim Khaled, the CEO of Blackbird AI. Blackbird operates at the intersection of a lot of powerful trends, from cybersecurity to information warfare to deepfakes. Wasim and his co-founders started the company during the middle of the last decade to address disinformation and narrative manipulation. Their founding thesis was that perception itself was becoming an attack surface. That led them to spin up a startup that Gartner recently named the Company to Beat,

for Disinformation Narrative Intelligence in its report on the AI vendor race. I talked to Wasim about how the company evolved from a lab experiment into a key player in national security and cybersecurity. In our conversation, Wasim elaborates on the concept of narrative attacks, their implications, and how Blackbird's solutions have adapted to the changing landscape, particularly the rise of generative AI, which has made it easier and faster than ever

for attackers to create and distribute malicious content. We also talked about reactive versus proactive approaches, one of the timeless dilemmas in cybersecurity. Lastly, Wasim shared his tips for optimizing his time for maximum leverage as an executive. He also explained why, if you're a leader and you're still relying on chatbots more than AI agents, you're not keeping up. Trust me, you don't want to miss that part of the conversation. All right, so here it is: our conversation with Wasim Khaled of Blackbird AI.

Jason Hiner (00:02.661)
All right. Well, Wasim, I thought we would start by talking a little bit about why you started Blackbird. What was the thesis? You've been at this for a while, certainly since a little bit before the AI boom took off. You've been working on some of these things for a while. And the thesis that you had, as I understand it, has only gotten a lot more acute and a lot more powerful in the years since you began. So maybe: when did you start, and why?

Wasim (00:15.148)
That's right.

Wasim (00:31.374)
Yeah, absolutely. And first of all, Jason, thanks for having me on. I look forward to having the discussion. So, you know, we started, my co-founder, Naushad UzZaman, and I, we go back quite a ways, both computer scientists. And back as early as 2017, you know, I had gotten out of a prior startup. He was on a research team. He had worked at Microsoft and Nuance, in the AI space, way back then.

The problem that we were thinking about was disinformation and information manipulation. There were a lot of people talking about fake news and bots, but that's not really what we were thinking about. Because what we were thinking about was we were seeing things online that didn't quite make sense. Markets would move, people would get targeted, institutions would get destabilized, it would be driven by conversations on social media.

Jason Hiner (01:14.267)
Okay.

Wasim (01:23.928)
There were no cyber breaches or system failures creating this damage, which was more typical of, I guess, electronic warfare and things of that nature. I think what we really saw was that the common thread was this manipulation of human perception. So we had this hypothesis: what if coordinated narratives are being shaped by outside forces, by adversarial actors, and they're using techniques and tradecraft to change outcomes to their benefit, financial, reputational, societal?

Jason Hiner (01:33.328)
Yeah.

Wasim (01:55.584)
And this seems like kind of common knowledge now. Back then, I will be honest, some people called us conspiracy theorists when we said bot networks were shaping opinions online. They would say, hey, these are people.

Jason Hiner (02:02.299)
Mm.

And this was 2017, right? Just after the 2016 election, where this blew up. Oh, okay. Before.

Wasim (02:10.392)
Well, actually 2014 and '15 is when we first started talking about it amongst ourselves, while we were doing other things. 2017 was when people started going, okay, maybe something's a little fishy, but they didn't really understand the veracity and the widespread nature of what was really already happening at the time. So we were thinking even then that perception was becoming an attack surface. And we set out to prove that.

Jason Hiner (02:18.319)
Okay.

Wasim (02:38.894)
And that's really how the company was conceived. It really started as a lab experiment and ultimately became a prototype that ended up putting us square in the middle of national security scenarios, where we were helping national security organizations. And of course, more recently, something we were concerned about even back then were things like artificial intelligence collapsing the cost and the ability to scale

Jason Hiner (03:05.723)
Mmm.

Wasim (03:07.5)
what we call narrative attacks. So that's really what we saw coming really early on. And in some ways, it's unfortunate to say that we were correct in where those things were gonna go in the future. And today we find ourselves in wholly manipulated realities online.

Jason Hiner (03:09.275)
Yeah.

Jason Hiner (03:27.099)
Yeah, wow. There's so much to unpack there, Wasim. I mean, that's a lot of history to get you here. Maybe you could unpack your original thesis. Was it about narrative attacks? Or has narrative attacks come to be the phrase you've used to describe this? Yeah, tell us a little more about that.

Wasim (03:49.848)
Yeah, absolutely. I mean, one of the things that are still sometimes a problem in this narrative intelligence category, as it's starting to get named today by Gartner and others, is that there really wasn't a name, there really wasn't common terminology, to describe this kind of attack surface. It's kind of like once upon a time there were no...

Jason Hiner (04:01.337)
Yeah.

Wasim (04:12.142)
there were no words for like ransomware, right? Things of that nature. And so we were looking at all of these different symptoms within the information ecosystem.

Jason Hiner (04:14.64)
Yeah.

Wasim (04:21.386)
risk signals, as we used to call it, information manipulation. And there's still a lot of different terminology for it: cognitive warfare; in Europe, a lot of them call it FIMI, foreign information manipulation and interference; things of that nature. Narrative attacks actually is something that's only about two and a half years old. It was actually my team and I, as well as our CMO, Dan Loudon, just sitting in our office saying we have to come up with something to actually put all of these things in a bucket so people can understand.

Jason Hiner (04:33.807)
Okay.

Wasim (04:51.36)
Narrative attacks, as we describe it, is a coordinated attempt to shape belief and behavior at scale to create a real world outcome, usually harmful. Financial loss, operational disruption, executive risk, regulatory pressure, whatever it might be. A narrative attack typically would not have happened on its own. And that's the distinction. These are not trends or sentiment. They're forced narratives without people actually knowing.

Jason Hiner (05:00.347)
Okay.

Wasim (05:19.094)
Right? And that distinction really matters because enterprises nowadays in these scenarios don't fail due to like one thing that you might say in the news. It's like, how is that narrative carried? Who is it delivered to? And who's really driving that?

So when perception drives decisions, before the truth catches up and you don't understand what's happening, you're really on the back foot. And we put all of these things into this term so we could call it something and then we could explain it to people. There's been a lot of education over the past few years, but people are really starting to get dialed into the problem set now.

Jason Hiner (05:40.399)
Hmm.

Jason Hiner (05:55.548)
So when you started this a decade ago, over a decade ago now, you had this theory that this was a different attack surface. It was an attack surface that didn't have... this wasn't code related. Or maybe it was? Was it code related in some cases, or was it mostly other forms of misinformation and disinformation?

Wasim (06:18.062)
Well, I think the key thing here was it's an opportunistic situation, right? Because there's always going to be things that people can latch onto that can create controversy and attention. And I think the threat actors that are out there today, they wait for anything. Sometimes it's something very small. And they use these things, these opportunities to almost create like a lightning rod.

for their purpose, sometimes that lightning rod goes through a company because by using a company you can draw attention to it, especially like say a Fortune 10, Fortune 50 company, or a person who is, you know, high visibility, right?

Jason Hiner (06:47.097)
Mmm.

Jason Hiner (06:55.63)
Okay.

Wasim (06:59.586)
You can use those entities, those people, those organizations to drive a narrative attack. You could have a short sell on the other end of that narrative attack. You could have a very partisan agenda or political kind of situation that might be brewing around a geopolitical conflict and use that to damage a company because of a position they might have taken. And the goal there is a narrative attack has some form of synthetic amplification behind it, right?


Wasim (07:29.72)
and very specific targeting. I think that's the thing that really differentiates it from what bad publicity or organic conversation tends to be.

Jason Hiner (07:39.95)
Okay. So.

At the moment you launched this, or started the work of the company, I guess I should say, this was the time of the 2016 election in the US, when "fake news" became a term. And there was also fake news about fake news: if people didn't like something, they would call it fake news. But there was this charge, or perception, and it even went to the level of investigation, around whether threat actors, Russia in this case, manipulated forums and things like this to change sentiment.

When all of that happened, right as you all were getting started, how did it play into the ways that you thought about it, and the work that you thought you could do to help? This had suddenly burst onto the scene as a really challenging situation, where people couldn't be sure whether what they were reading online was true.

Wasim (08:56.844)
Yeah. I think in some ways, it didn't really change our thinking much, because we were already two years into examining the ecosystem. And a lot of that ended up being noise in some ways, because the...

Jason Hiner (09:05.243)
Okay.

Wasim (09:16.062)
mechanics, the tradecraft of what was actually happening in the environment is what we focused on the most. Meaning the notion of whether or not something was true or not, right? That was something we really didn't look at as much in the early days after like even spending a couple of months in the space because what we realized was what people really need to understand is any narrative, whether it is demonstrated to be true or false or contextually accurate,

Jason Hiner (09:22.394)
Okay.

Wasim (09:45.568)
a narrative attack can happen either way. It's more about how that narrative is being propped up and driven. So our approach was always looking at networks. That was one area where we really departed from the traditional thinking. The contagion-like effect and spread of any narrative, whether it is accurate, true, or false, is actually a really good indicator of intent and tradecraft and threat actors. And so...

Jason Hiner (09:54.074)
Mmm.

Okay.

Wasim (10:15.51)
We have never purported to be kind of like a true/false indicator, because of all of the gray area that exists within narratives, nuances, and just human opinion. But it becomes really important to understand if someone is trying to drive one narrative over another, to specific groups, with bot networks, or if there are state actors driving those narratives. Those are the indicators that we're really focused on, and that we really still do focus on today.

Jason Hiner (10:21.711)
Mm.

Jason Hiner (10:26.117)
Sure.

Jason Hiner (10:45.519)
So when did you release your first product, and what were the first ways that you approached trying to solve this challenge of information manipulation?

Wasim (10:58.412)
Yeah.

Well, we launched an engine, essentially, when we first came to market back in 2021, that could look at massive amounts of data and sort it into narratives, networks, actors. Here's the narrative, here's how it's spreading through the networks, here are the actor groups and communities that are trying to drive it in a manipulated manner. That turned into a more SaaS-based dashboard, which we call the Constellation Platform. We launched that back in late 2022,

primarily to enterprise customers. So Fortune 50s, Fortune 500s, crisis comms companies. Companies like Weber Shandwick, under IPG, were using this quite extensively to help understand how narratives spread. And then from there, especially in the last 12 to 18 months, cybersecurity and threat intelligence has become a core focus of ours, because this problem has become a core focus of many people in the threat intelligence space.

And so that's really how it's evolved. It started with this national security product, you know, the engine API that we launched, then went to our own product for enterprise. And now we're again providing an API, actually an agent-based API, agentic response systems with an API. Those will be our newest products a little bit later this year, heavily targeted towards cybersecurity and threat intel.

Jason Hiner (12:23.547)
Very good, so.

One of the biggest challenges in these kinds of spaces, with products like this, is you don't often get credit when nothing happens. If nothing happens, that's great, you accomplished it, but nobody says thank you because nothing has happened for a while. In cybersecurity, of course, that's the way it goes. Is it that way for you all? What does success look like for your customers? And do your customers say thank you for helping them avoid being the victim of

these kinds of crimes or misinformation? What does that look like for you all?

Wasim (13:04.11)
Companies typically are thinking about using us for strategic decision making, often at the C-suite level or at the board level. And what they're looking to understand, again, is why are we central to this narrative, and how do we respond to it? Or sometimes, do we not respond to it at all? In terms of nothing happening, I mean, when we're deployed, it's one of two things. Most of the time it's because something did happen already, and they want to prevent it in the future. So a lot of this stuff is reactive.

Jason Hiner (13:08.792)
Okay.

Jason Hiner (13:30.127)
happening again.

Wasim (13:33.87)
Given that this is a new category, right? It's not something that people are thinking about right off the top of their head. Proactive customers are less frequent than reactive, right? And therefore, you don't really have this scenario of, like, nothing's happening, because they come to you in a mess, right? So that's...

Jason Hiner (13:34.011)
Okay.

Jason Hiner (13:54.268)
You're helping them clean something up basically. Yeah.

Wasim (13:56.97)
Yeah, yeah. That's like incident response. Think of it as post-incident response analysis and monitoring, like in the cybersecurity space. However, we do see signs, in our pipeline and otherwise, that there's a lot of proactive thinking going on now, right? There are a lot of people who have had nothing happen to them, but something's happened, maybe in their industry

or to a peer. And they understand that this isn't a black swan event anymore, that you're kind of just waiting for something to happen to you unless you have something that can give you early leading indicators. That's particularly true for a lot of Fortune 500s around executive protection. So one of the products that we launched about a year ago now is something we call Raven Recon, and that is around executive protection across 26 different risk factors, almost in real time, across

Jason Hiner (14:47.195)
Okay?

Wasim (14:51.792)
different platforms, to help understand if your executives, your CEO, any of these top leadership teams, or your employees and teams are under physical risk, potentially, based on narrative risk. And that ties in more with the CSO, the Chief Security Officer, and physical protection. So what we have found, and what's really interesting about that, that we wouldn't have really thought about in the early days of the company, is that narrative risk is kind of parallel to this

Jason Hiner (15:06.062)
Mmm.

Jason Hiner (15:10.971)
Okay.

Wasim (15:21.744)
transition happening in the cybersecurity world around really focusing on cyber-physical risk, right? What's happening in the electronic space, and the direct spillover into the real world: critical infrastructure, supply chain, physical protection. And so it's interesting to have started to get pulled in on a lot of those use cases as of late.

Jason Hiner (15:45.156)
When you say physical risk, is it that you're seeing digital signals that could indicate that a person, an individual, might be at risk of attack or manipulation? Okay.

Wasim (15:54.648)
Correct. Yeah, yeah. Suppose it's something where you see a groundswell of negative sentiment, but not just sentiment, a number of other risk signals that we can look at. They might say, hey,

Jason Hiner (16:11.195)
Okay.

Wasim (16:12.618)
if this is happening, everybody in the airline industry who is in leadership might have to go from what you thought was green to orange or red from a physical protection posture, right? Whereas if you're waiting for someone to show up in your lobby, or at your house, which is typically what physical protection looks at, you could be very late to the game. And so this is an earlier leading signal that you might wanna elevate your posture.

Jason Hiner (16:22.373)
Hmm.

Jason Hiner (16:32.4)
Yeah.

Jason Hiner (16:40.901)
Gotcha. Okay. So in that case, it is a little bit proactive, right? Helping them get out in front of it. Because cybersecurity at the end of the day is all about risk and...

Wasim (16:51.318)
All of our technology and use cases are proactive. The question is whether the clients are proactive in actually coming to us when nothing has happened yet, just because they know it's a must have. So we aren't there yet as a category, because it's new, it is more reactive. So, no.

Jason Hiner (16:54.775)
Okay.

Jason Hiner (17:01.421)
Okay. Okay.

Jason Hiner (17:07.695)
That makes sense. I heard somebody in the cybersecurity industry recently describe the act of investing in security solutions as "buying down risk." And I thought that was a very helpful way to talk about it, because sometimes it's a challenge, if you are

bringing a solution to a company, to sell them on the ROI, right? But if you think of it in terms of buying down risk... I wanted to get your thoughts on that, and on some of the ways that you talk about this when you are bringing the solution to maybe some of those more proactive folks, the ones that aren't in a state of emergency, where they're like, okay, this happened to a peer, or it happened to us, and we need to protect ourselves. I'd love to hear about how you all talk about that.

Wasim (18:00.396)
Yeah, it's become easier as of late.

Right, and it was hard five years ago, right? And the reason is, unlike today, back then the world wasn't printing case studies every single day, right? Today it's a bit easier, and you have third-party validation of the risk. So I'll give you an example. I mean, Gartner has been packaging this as the narrative intelligence category, and they probably have like a dozen reports around this space now. They have dropped a couple of

Jason Hiner (18:04.484)
Okay.

Wasim (18:34.224)
statistics that are, I think, really eye-opening, that we often convey from an education perspective to clients. One is that by the end of 2028, there'll be approximately $500 billion in damages as a result of disinformation, misinformation, and narrative attacks. Another is that $30 billion will be spent against misinformation, disinformation, and malinformation by the end of 2028.

Jason Hiner (18:38.53)
Okay.

Wasim (19:01.35)
And that constitutes almost 50% of the Fortune 500 investing in this, whereas it's less than 5% today. And that's some really fast growth, given we're talking about a two-year horizon. A lot of that, again, is the compressing cost of being able to run these narrative attacks via agents and things of that nature. So that whole paradigm is shifting in terms of how much damage is being done. I can think of one company that, due to a series of

Jason Hiner (19:09.261)
Okay.

Jason Hiner (19:21.572)
Yeah.

Wasim (19:31.136)
narrative attacks, lost $20 billion plus in market value on its own, right, and never really recovered. The average CEO or CISO that we speak to at very large companies says that a successful narrative attack can hit their market value by as much as 25%, right?


Wasim (19:49.676)
And you think about the scale of those companies, how much shareholder value was potentially lost. So these third-party facts and figures, and the fact that these things are actively happening and visible, I mean, those are really some of the key things that weren't around three, four, five years ago. So that's one thing that we often convey. And then we also convey, through use cases and abstracted stories of what we've seen, how does this typically play out? And what does the world look like

Jason Hiner (19:52.091)
Mm.

Wasim (20:19.6)
where you don't have the right tools to see and mitigate narrative attacks versus when you do. Being proactive on these things can save the company and in some cases can save lives as well.

Jason Hiner (20:34.363)
Very good. So I'm sure you have some case studies, and probably companies you could talk about, but if you want to keep it more broad, you can. Give me an example of a company that had something happen to them that had this level of impact, and then what did you do to go in and help them

proactively become better fortified and buy down their risk around these kinds of issues?

Wasim (21:03.404)
I can generalize one particular use case that has happened to a lot of different customers. In fact, a number of banks that we work with had a lot of these, a lot of boycott and activism groups, some pretty gnarly ones as well, meaning potential physical threat to both infrastructure and people. And it often started around, you know, political or geopolitical positions around many of the conflicts that are happening around the world, right?

Jason Hiner (21:08.184)
Okay.

Jason Hiner (21:22.959)
Okay.

Wasim (21:31.67)
and those led to the physical riling up of in-person confrontations. And so it was about being able to understand what that footprint looked like: whether there were potential state actors whipping up all of that, whether these are real people that are

potentially angry, galvanizing, and organizing online, or whether they're actually synthetic, designed to make us go in a certain direction, or make a statement that further gets us embroiled

with the real people that care about these issues. You see, a lot of this is about subterfuge, about making an organization go in a direction that the campaign wants them to go in. If you take the bait and you actually go in that direction, you could step on landmines that the threat actors or adversaries want you to step on. And that creates a further problem. So in this case, if I just think of one particular bank, we were able to understand that this particular thing that's happening,

Jason Hiner (22:04.571)
Hmm.

Jason Hiner (22:23.611)
Okay.

Wasim (22:31.696)
that you're about to respond to, which will bring much more focus and attention to the problem, is actually gonna move on in 48 hours because based on our look at the network graph and looking at other things that have happened with this same group of actors, it seems like they move on if you don't engage. If you do engage, you're about to become majorly embroiled. And sure enough,

they changed their strategy, and they were able to avoid potentially major issues. There are other times where companies have actually thought about pulling out of entire countries over various things of this nature. And then you find out that 95% of this chatter is state-actor-driven bot networks. So suddenly you're like, oh, our audiences actually are not upset about this at all. So why should we exit?

And that was another scenario where a company made a decision on our platform in hours, and it was a multi-billion-dollar decision they would have gotten wrong otherwise, listening to the chatter, right? Which other systems can't differentiate.

Jason Hiner (23:35.206)
Yeah. So talk a little bit then about the multiple products you now offer. Talk about those different solutions, because it sounds like this is now a suite of solutions that helps companies approach some of these

very challenging problems, where there are now, especially, AI tools that can help you manipulate a whole network of chatbots, that can help you create very real-looking false information about you, and those kinds of things. What are you all doing to work on that very different set of

challenges?

Wasim (24:23.032)
I think the challenges have just evolved. The medium and the execution of them has changed. And the ways in which we address it through our products have not largely changed, because we've been looking at the same problem through this lens for a while. What I mean by this is, when we first started, teams of humans were still generating and creating content,

Jason Hiner (24:31.044)
Okay.

Wasim (24:52.194)
retweeting, sharing in networks, things of that nature, through a lot of collaboration, the way a creative agency would: people, time, money, know-how, cultural resonance, et cetera. Now, all of those teams, state actors, and basically threat actors just in general,

Jason Hiner (24:58.733)
Okay.

Jason Hiner (25:04.475)
Yeah.

Wasim (25:17.434)
they've been equipped with the same AI-driven tools and ecosystems that we all live in today to accelerate our work. So they have now been able to accelerate their work.

They use those same techniques, but they can teach agents, and they can teach AI, to sound like who they need to sound like. And more recently, they can actually build entire agent networks that can create threads of comments and things of that nature. So just like everyone else, they can do more with fewer people and better tech.

Jason Hiner (25:44.112)
Yeah.

Wasim (25:48.384)
So for the platforms that we provide, let's say we take Constellation: this is a deep investigation tool that looks at how narratives spread through networks and actors. Now, sometimes you're not gonna be able to determine if it's actually agents.

However, you'll certainly be able to see patterns that deviate from what normal human conversation looks like, because that's something you have a history of. It's like the bifurcation of how a narrative spreads, that fingerprinting signature. You can then say there's something different happening here, and so it requires a different response. And a big piece of this is being able to see how those synthetic networks work and how they move those narratives through. Being able to see that lets you decide how to handle each narrative attack differently, based on the properties that you see.

Jason Hiner (26:11.515)
Okay.

Wasim (26:32.432)
Constellation is our deep investigation tool. We have a product called Compass Context, which is another agent-based tool, a context checker. You drop in a link, a video, an image, or free-form text on something you want to know about. It does all that research for you, gathers it up, and drops it to you in a one-liner or a paragraph with sources, so that you can understand what's happening there. That is also integrated into our larger platform, in case you want to analyze data inside that platform. We have something called Compass Vision,

which detects deepfake images and synthetic videos, and also lets you see how those deepfakes are involved in narratives that can create impact. That's where we have a bit of a different twist on how we think about synthetic media, because it's not just about whether it is synthetic media or not, it's about where that media is traveling and how much impact it is actually having within the information ecosystem. I guess the last thing I would say is, right now, one of the biggest things we're working on is

taking all of our framework, our API, something we call a unified context graph, because that's what we've built over the past seven years or so, and using it to train up agentic systems that can actually be intel agents that can give you, within the platform, recommendations and best approaches, something that requires less subject-matter expertise and less work from our own human intel analysts, which we also used to deploy for a lot of companies in the past.

Jason Hiner (27:38.747)
Mm.

Jason Hiner (28:02.011)
Okay.

Wasim (28:04.086)
A lot of these things are now being trained up so that agents can actually do the work within these network graphs and within our system. And that's something that we're working on in the first half of this year.

Jason Hiner (28:15.673)
Very good. So that brings me perfectly to my next question. Blackbird began, obviously, before the more recent AI boom, especially around generative AI. We know that has empowered a lot of the threat actors in this space. What does it mean for your company? How have these new capabilities with agents, which I know you're already working on,

as well as the capabilities of these models, which are advancing in very, very large steps, changed the game for you all? How are you leaning into that?

Wasim (28:57.646)
Yeah.

Yeah, so we were always an AI company from the early days. My co-founder, for example, did his PhD in artificial intelligence way back before the boom, for sure. And so we had a pretty large machine learning and AI team since the early days. I think the biggest difference is that using some of the more powerful tools to accelerate work across the company has been probably the biggest game changer for us, right? So using tools to be able to do, you

Jason Hiner (29:08.218)
Okay.

Wasim (29:29.16)
know, the work of like five engineers with one engineer, helping the product team do things on the product development side they never would have been able to do before. And across every division we use agent-based research, some of our own, some external, to just make everyone faster, sharper, and smarter.

And so we have a big push to use agents so we don't have to redo anything in the company. Create those skills that you can offload to agents and get your Iron Man suit so that you can accelerate in the way you need to, to keep up with the adversarial space too. So I think being in this space means you have to really entrench yourself in those tools for acceleration and precision. I think the only other thing, though, is

Jason Hiner (30:00.123)
You

Wasim (30:16.238)
The technology handles speed, scale, modeling. Humans still have to handle, especially for critical use cases like what we do, judgment, final messaging, and trust building.

Jason Hiner (30:27.589)
Hmm.

Wasim (30:28.47)
And I think there are a lot of companies, and a lot of people that we speak to, who think it's either-or. AI ends up becoming an early warning, signal processing, and decision support layer. That's the way to think about it. But the humans in the loop still act before something escalates. There are a lot of buttons to push still to make sure that the right thing happens in the end before you pull the trigger.

Jason Hiner (30:41.595)
Okay.

Jason Hiner (30:55.781)
Humans have to turn some of the knobs and then ultimately make the call on things.

Wasim (30:59.818)
I mean, if we're really talking about like the path forward for humans, they're pretty much gonna be like the knob turners. That's gonna be our primary function, right?

Jason Hiner (31:09.002)
Very good. How about models? Do you have to create some of your own models to do some of this work? You know, there's a real push on small language models and domain-specific models. Or are there specific models that you use to get some of the work done that are helpful?

Wasim (31:27.374)
I mean, we have thousands of classifiers, right? But in terms of small models or foundational models, it's just not something that we've really delved into, because our core technology is really more like guardrails for those LLMs and models, to be able to do things and reason in the right way. So I go back to our core technology being something we call the unified context graph,

the relationship graph, and the fingerprinting signature around this category, around these kinds of incidents. And I think the key part there is that without some sort of guardrail like that, it can very much be garbage in and kind of garbage out. And by that, I mean it kind of looks like something's happening, but someone who's got a trained eye then looks at it and they're like, well,

Jason Hiner (32:00.856)
Okay.

Wasim (32:23.596)
This is just kind of pedantic drivel, AI slop, whatever, right? Add in something like our context graph and suddenly that thing sounds like maybe an intel analyst with 20 years of experience, right? It's telling you the things that you need to understand. And then over time, as our users use that, the decision traces that they're making make the system smarter, right? For everybody. So that's the best way to think about that.

Jason Hiner (32:27.993)
Yeah, yeah.

Jason Hiner (32:49.4)
Okay.

Jason Hiner (32:53.699)
So your systems have to be able to look at any model, and really, irrespective of the technology, they have to be able to evaluate the data that's coming out of any of those systems.

Wasim (33:06.446)
Yeah. I think most companies kind of have a model selector right now anyway to swap out the core foundational model, right? It's a component of either cost or maybe some specialization, but most of it is about using your internal IP, whatever that might be. For some people it's knowledge bases. For our space, knowledge bases don't really...

Jason Hiner (33:14.277)
sure.

Wasim (33:30.902)
It's more that knowledge and context graph, which creates the guardrails you really need to reason in the right way and create value in the output from a narrative intelligence platform.
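The "model selector" pattern Wasim mentions can be sketched as a thin routing layer. This is a hypothetical illustration, not Blackbird's setup: the backend names, prices, and stub clients are all made up, but the shape of the logic is the common one, preferring a specialist when the task needs it and otherwise taking the cheapest model that fits the budget, while everything downstream stays the same.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ModelBackend:
    name: str
    cost_per_1k_tokens: float        # placeholder pricing, not real rates
    specialties: frozenset
    generate: Callable[[str], str]   # stand-in for a real API client

def make_stub(name):
    # Stub generator so the sketch runs without any external API.
    return lambda prompt: f"[{name}] {prompt[:40]}"

BACKENDS = [
    ModelBackend("cheap-general", 0.10, frozenset(), make_stub("cheap-general")),
    ModelBackend("pricey-reasoner", 2.00, frozenset({"deep-analysis"}),
                 make_stub("pricey-reasoner")),
]

def select_backend(task: str, budget_per_1k: float) -> ModelBackend:
    """Prefer a specialist if the task needs one; otherwise take the
    cheapest backend that fits within the per-1k-token budget."""
    specialists = [b for b in BACKENDS if task in b.specialties]
    pool = specialists or BACKENDS
    affordable = [b for b in pool if b.cost_per_1k_tokens <= budget_per_1k]
    return min(affordable or pool, key=lambda b: b.cost_per_1k_tokens)

chosen = select_backend("deep-analysis", budget_per_1k=5.0)
print(chosen.name)
```

The point of the pattern is that the swap happens behind one interface, so the guardrail layer (the context graph, in Blackbird's case) never has to care which foundation model is underneath.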

Jason Hiner (33:45.37)
I'm glad you brought up cost, Wasim, because this is one of the things that I hear a lot from people working with the models: it's so expensive. You're making a lot of API calls to models, and you can run up the costs of that really quickly. It can become a real cost center for your business, essentially the AI inference costs. Have you seen that? You mentioned having to use different models because of cost

as well as other things, is that something that you all have seen in terms of working with these technologies and putting them to work in what you do?

Wasim (34:23.682)
You just have to be really smart about putting the controls in place and best practices.

in terms of token usage, and alerts around token usage, and everybody needs to be kind of indoctrinated into the fact that they just have to be conscious of certain decisions that they make and what kind of impact there is. We have a lot of internal dashboards we've built around that as well, just to make sure that we have a good understanding of it. But ultimately, for what you can do with it and how much it improves the product and accelerates work,

I mean, there's no question that it's definitely worth it. It makes sense.
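The controls Wasim describes (token-usage alerts, dashboards, per-team awareness) can be approximated with a small usage ledger. This is a hypothetical sketch, not Blackbird's internal tooling; the model names, prices, budget, and alert threshold are all invented for illustration.

```python
from collections import defaultdict

# Placeholder per-1k-token prices; real provider rates vary by model.
PRICE_PER_1K = {"small-model": 0.0005, "large-model": 0.03}
DAILY_BUDGET_USD = 50.0
ALERT_FRACTION = 0.8  # warn when a team passes 80% of its daily budget

class UsageLedger:
    """Track inference spend per team and emit alerts as thresholds are hit."""

    def __init__(self):
        self.spend = defaultdict(float)  # team -> dollars spent today

    def record(self, team: str, model: str, tokens: int) -> list:
        """Record one API call and return any alerts it triggers."""
        self.spend[team] += PRICE_PER_1K[model] * tokens / 1000
        alerts = []
        if self.spend[team] >= DAILY_BUDGET_USD:
            alerts.append(f"{team}: daily budget exceeded")
        elif self.spend[team] >= ALERT_FRACTION * DAILY_BUDGET_USD:
            alerts.append(f"{team}: past 80% of daily budget")
        return alerts

ledger = UsageLedger()
ledger.record("intel", "large-model", 1_000_000)         # ~$30, no alert yet
alerts = ledger.record("intel", "large-model", 400_000)  # ~$42, past 80%
print(alerts)
```

A real setup would feed these counters from provider usage APIs into a dashboard, but the core discipline is the same: meter every call, and make the alert fire before the invoice does.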

Jason Hiner (35:01.603)
Yeah, that's great to hear. It's interesting, as I think about that, some of the companies that are doing a lot of the work around that, and doing some of just what you described too, like putting dashboards in place for understanding your AI costs before they get out of control, a lot of them are based in New York. So it made me think about that, because you all are based in New York, although the company you were at before Blackbird, your previous company, was actually based on the West Coast and Asia.

Talk a little bit about having an AI company and a startup in New York. Was that a conscious decision, and is it working out really well?

Wasim (35:43.406)
Yeah, absolutely. It was definitely a conscious decision. I mean, my co-founder has been in New York for some time. I was in upstate New York for a while, really during the pandemic. But yeah, at my last company I was kind of all over the place on a regular basis. New York was where I wanted to have this company, just for a number of reasons. One was the client base, right? Between comms and finance, which are two key categories, this was the place to be. I've always been a

Jason Hiner (36:05.083)
Okay.

Hmm.

Wasim (36:13.36)
Northeast, really somewhere-in-New-York person. And I just think what I saw from the tech scene in New York was really vibrant while still having a lot to do.

Right, you know, because it's just from the perspective of where do we want to attract people, and what kind of people do we want to attract to the company, right? The culture component of it was a big piece of it. What I think was really funny is I was just invited to this event, the 30 Years of Silicon Alley, which happened just on Friday night here in New York. And it's funny, I think,

I don't know how many people they had invited to this thing. I think it was like several hundred. But I think like 1,500, 1,600 people showed up. And the fire marshal actually shut the thing down for like an hour because they didn't know what was going on. But what that tells you is that the tech scene in New York is very different than it was even five or ten years ago.

Jason Hiner (37:03.995)
Okay.

Jason Hiner (37:11.811)
You

Wasim (37:27.49)
Right? And so I thought that was really interesting. The scene here is becoming really pretty hot compared to most other cities. I think there's a big discussion going on right now on the New York versus San Francisco decision, right?

Jason Hiner (37:28.421)
Yeah.

Wasim (37:48.608)
So yeah, I love having the company here. Our team loves having the company here. We're a very in-person company. You know, we have our offices in Midtown, and the energy helps, for sure.

Jason Hiner (37:48.635)
for sure.

Jason Hiner (38:00.024)
Yeah, it's your HQ. Is the vast majority of your team still in New York? Do you also have an office on the West Coast or anywhere else?

Wasim (38:09.234)
We have a small office in Singapore, actually. We have an APAC business that we're spinning up there, have been spinning up for the last couple of years. Practically our entire C-suite is here in New York, and the team keeps getting bigger. I have a bias for hiring in New York now as well, just because it's great to make use of the office space and have that

Jason Hiner (38:12.421)
Okay.

Wasim (38:32.567)
collaborative feel. Otherwise, I think, well, we're never gonna be an everyone-get-back-into-the-office type of company. We've always been hybrid, even before it was a thing. But you can't really beat in person when it comes to some of the bigger lifts, right? And I think the biggest thing that is hard to see

when you're not all in the same place is that the teams can't see the efforts that all the other teams are putting in, right? Because they're removed from the day-to-day. You can't go look at the bullpen and see what sales is doing. You can't go see what the engineers are doing. And so, you know, we bring people together as often as we can, whether it's for offsites or hackathons or sales kickoffs. Any opportunity we get, we want to do those in person.

Jason Hiner (39:04.613)
Sure.

Jason Hiner (39:24.379)
Very good.

You know, one of the things people love to ask CEOs, because of the diversity of the ways CEOs spend their time, like I'd love to ask you, what do your days look like? Like how do you spend your time? What are some of your top priorities? What are the ways that you think about moving your company forward in this landscape that is evolving so rapidly, especially in the space that you're in?

Wasim (39:43.853)
Yeah.

Wasim (39:54.968)
You know, I'm pretty fortunate to have a really incredible chief of staff that I've been working with for four-plus years. And that also helps in keeping my time monitored in terms of how I manage it. It's all about leverage, right? So the thing is, how do I utilize my day, my week, for maximum leverage? Just to give you an example, my typical day starts at like 3:30 in the morning, right?

You know, I don't require much sleep. I never have. So I do all my deep work between like 3:30 and 7 a.m., right? That's reading all the things that everyone in the company or external needs to tell me, maybe reading notes on a podcast I might have to do later. All that stuff gets done in the early mornings, before I have all of the meetings and the Zooms and the questions and requests. You know, we have Thursday no-meeting days: no Zooms, no meetings unless you have something. It's very important to

Jason Hiner (40:30.861)
Okay?

Jason Hiner (40:39.022)
Yeah.

Wasim (40:52.556)
let everybody do their deep thinking and deep work. And then, like, my days are broken into things like that, where I might not have any one-on-ones whatsoever. So it's broken into kind of like work streams. And, you know, again, my chief of staff and I think about energy levels as well, right? Like, okay, if nine-to-twelve energy is high, then you do the brainstorming and the soundboarding and the decision-making there, and then you save emails and Slacks and everything else for the latter half of the day, right?

So I think if you don't do things like that, you can't really create leverage. You're kind of just doing all things all the time as they come. So some discipline is necessary there. All the tricks of the trade: calendar blocking, using AI agents for roll-ups on information. It's a big piece of it.

Jason Hiner (41:30.544)
Yeah.

Wasim (41:44.05)
At least for me as a CEO, I'm always thinking about what am I missing, what more could I have squeezed out of the day or the week. And so we even review at the end of every week, literally on a pie chart: okay, here's personal time, one-on-one time, external meetings, commercial, deep thinking. How did we do this week, right? So yeah, that's something I'm very specific about, to make sure that time is leveraged.

Jason Hiner (42:12.751)
Yeah, that's really smart. You mentioned that inside the company that you're using AI tools to enhance what people are doing, productivity-wise. How about you? What AI tools do you use that are helpful to you?

Wasim (42:30.518)
I mean, Claude Code, of course, is something that I absolutely use, and I have a whole bunch of really interesting agent and sub-agent networks that I use to help me think about problems, everything from, you know, digital twins of very smart people I might want on my advisory board to, you know,

having a lot of ingestion around a lot of our key content so that there's a lot of memory retention when I'm talking to an agent. They understand the business, understand how I think.

You know, my chief of staff, for example, uses a ton of agents to roll up my daily briefs, all of the to-do items that are happening in our different transcribed records, so we can look at actions and decision items from different meetings and see that they're getting done. So for me, it's like the office of the CEO is hypercharged by all kinds of tools. Lindy is another really good one that I like. Lindy is an agent network that you can build and kind of coordinate to do different things, like research.

Jason Hiner (43:23.483)
Mm.

Wasim (43:32.174)
I think my go-tos are Claude Code for specific things and then Lindy agents. I still use some ChatGPT as well. But man, if people are still using just ChatGPT and not exploring things like Claude Code and skills within Claude Code and whatnot, they're already behind the curve, although every day it feels like you're a little behind the curve as it relates to all of these tools.

Jason Hiner (43:57.903)
For sure, those are some great takeaways to share. Wasim, thank you so much for your time. It was a pleasure having you on the show.

Wasim (44:02.998)
Absolutely. Yeah, thanks for having me.