The secret sauce to your sales success? It’s what happens before the sale. It’s the planning, the strategy, the leadership. And it’s more than demo automation. It’s the thoughtful work that connects people, processes, and performance. If you want strong revenue, high retention, and shorter sales cycles, the pre-work—centered around the human—still makes the dream work. But you already know that.
The Unexpected Lever is your partner in growing revenue by doing what great sales leaders do best. Combining vision with execution. Brought to you by Vivun, this show highlights the people and peers behind the brands who understand what it takes to build and lead high-performing sales teams. You’re not just preparing for the sale—you’re unlocking potential.
Join us as we share stories of sales leaders who make a difference, their challenges, their wins, and the human connections that drive results, one solution at a time.
Jamie Brown [00:00:00]:
AI may be new to some, but the core principles for security and data protection remain unchanged. It's essential to know who has access to your data and how they're using it. For the security team, the key is adaptation, not avoidance. So finding ways to lean into and lead the way in integrating these new technologies such that your employees and your products can use them while keeping the data secure.
Jarod Greene [00:00:21]:
You're listening to The Unexpected Lever. Your partner in growing revenue by doing what you already do best, combining your technical skills with your strategic insights. This episode was taken from our LinkedIn Live series about sales engineering with our CEO Matt Darrow. We hope you enjoy.
Matt Darrow [00:00:38]:
Hey everybody. Thanks for joining us today. I'm Matt Darrow, Co-founder & CEO of Vivun. I started Vivun after a career running global sales engineering teams at private and publicly traded companies. And I'm here today with Jamie Brown and Jessica Siclari. You know a little bit about them from those brief introductions. But to take it into the topic today around security, Jamie and Jess, can you give the audience a little bit more background about yourselves, especially on the professional side? And Jamie, why don't we start with you?
Jamie Brown [00:01:09]:
Thanks, Matt. Glad to be here. So I'm serving as CISO here at Vivun. I'm responsible for leading our security program, safeguarding our company data, and ensuring the security of our products and services for all of our customers. A bit of background: I've been with Vivun for about four and a half years. I started back when it was a Series A company, and prior to that I had 20 years of experience in cybersecurity, mainly in the federal space as well as in financial services, including Bridgewater Associates. So with that, I'll hand it over to our colleague Jess.
Jessica Siclari [00:01:38]:
Hi. Thanks, Jamie. Thanks for having me, Matt. So I'm on the governance, risk, and compliance team here at Vivun. I help make sure that we maintain the highest standards of security, privacy, and data governance. Every year we successfully attain our SOC 2 Type 2 and ISO 27001 certifications, and I think a lot of that is thanks to the rigorous security policies, privacy practices, and governance frameworks that we've put in place. I've been at Vivun for a little over three and a half years.
Jessica Siclari [00:02:07]:
So I've been a part of this security program as a number of new AI technologies have hit the space.
Matt Darrow [00:02:15]:
Well, thanks guys. And you two are the perfect panelists for today's discussion around security, because we're going to talk about not only data protection and security, but its relevance, and especially how it's changing and evolving because of agentic AI. Your backgrounds in cybersecurity, physical security, the government, and financial institutions give you a wealth of knowledge amassed over 20, 30-plus collective years that we're really excited to share with the audience today. And specifically, amid this rapid pace of innovation, how can you out there, the audience, stay on top of not only all these evolving threats, but also learn how to gain security sign-off for your own deals as your own companies are becoming AI-driven in this new realm of technology? So if you're in sales or presales and you're trying to keep up with AI's increasing impact on your role and your world, you are in the right place. Jess, let's start with you. When it comes to agentic AI, there's a lot of newness in the tech, but it feels like there's also a lot more discussion, a spotlight placed on security and privacy. So why does that feel like a big shift that's all of a sudden occurring? That security is really the star of the show, not only when customers are purchasing solutions, but also as vendors need to be mindful of how they're actually getting their solutions out to market.
Jessica Siclari [00:03:38]:
Yeah. So while AI feels revolutionary, and it is revolutionary from a technological perspective, from a security perspective we're still applying the same core principles that we've always relied on, right? So access control, encryption, defense in depth. Whether you're moving from an on-prem data center to the cloud or now adopting AI, the fundamental questions remain the same: Where's my data? Who has access to it? How is it protected? And our governance principles, so protecting data, its use, and the systems that store it, those don't change. What is evolving is that we apply those principles to AI-specific risks, like AI model training, for example.
Matt Darrow [00:04:22]:
Well, good point. And to your point, I like those keystones: where's my data, who has access to it, how's it protected? These are a lot of the same paradigms and controls, but now you're looking at them with some of those new risk factors. So Jamie, you said to me offline that the role of security is so much about risk management, right? What are some of the new risks that you're seeing in the world of AI, and especially agentic AI?
Jamie Brown [00:04:46]:
Yeah, that's right. So you know, decisions in cybersecurity mainly come down to risk management. The first area is around data protection and the risks associated with how that data is being used, and whether it's being used to train the actual AI models. Unlike traditional SaaS, where data is simply stored and sometimes processed, with AI the data can actually be used by the vendors to train their models. This means that data isn't just stored; it can be incorporated into the AI learning process and potentially accessed by third or fourth parties later on. If the data training isn't managed properly, this creates huge risks. For example, company proprietary information, sensitive information, or even customer data may actually become part of a public AI model later on. Another risk that we see is related to the flow of data as it relates to the constructs around AI.
Jamie Brown [00:05:37]:
So customers and data owners really care about their data and how it's being processed, and they want to know how AI is being used to inform the actual outputs associated with it. I'll highlight a couple of examples here. The first one is around the risks of input. There's an emerging threat around data injection attacks, where malicious inputs could corrupt the underlying AI system itself, or even skew the queries and therefore the actual results. The next is around the outputs. This brings us to the risks of AI hallucinations. We had a former colleague talk about this extensively in terms of how AI can hallucinate, giving inaccurate results or misleading outputs. That's why at times you see that small disclaimer in the box that says, "I can make mistakes, please verify this information before using it."
Jamie Brown [00:06:28]:
So as you can see from these few examples, the technology requires transparency. And transparency is critical for any company that adopts and builds these technologies: when a company builds them, it needs to communicate how your data is actually being used at every stage of the pipeline, including the input as well as the output.
Matt Darrow [00:06:49]:
Well, and you mentioned some of this. I think what a lot of folks are concerned about is your company's, your customers', your proprietary sensitive information unintentionally becoming part of public AI models that other people can tap into without your awareness. So Jamie, what's a headline that you hope you never have to see, from a security point of view, in this agentic AI world?
Jamie Brown [00:07:15]:
Sadly, it's already happened to some companies, and that's data loss. Data loss is the biggest nightmare associated with this. It underscores why you need to manage risks and why it's so critical. You know, just as a quick side story, there was an employee at a large electronics manufacturing company who uploaded proprietary information to a very early version of ChatGPT, and it actually became part of the training model and was eventually public. So that was an unfortunate data loss for that firm.
Matt Darrow [00:07:41]:
Yeah. How about from your side, Jess?
Jessica Siclari [00:07:43]:
Yeah, I would second that. I think the wild-west approach that some companies take, which is implementing tools without guardrails, is dangerous. Another headline that comes to mind, though, is around the use of AI to deceive people. Even seemingly innocuous examples, like the fake videos on social media, I just really don't like. And it gets even worse when you think about companies that aren't transparent about where or how they use data, so then the users are left to figure out what's real and what's not.
Matt Darrow [00:08:16]:
Well, and we were saying too, Jess, you have this evolving threat landscape, and at the same time there aren't just new threats, there are new regulations. When I think about regulations, especially being here in the States, the EU always comes to mind for doing things a little bit differently. I know you stay close to this as well, Jess. How is this different? What's going on on the EU front? Are they all aboard the AI train, or are they radically different in their adoption from the security side?
Jessica Siclari [00:08:42]:
So they're definitely aboard the AI train. The EU's AI Act addresses AI-specific risks and sets in place guardrail regulations with an emphasis on ethical and trustworthy development. So it is more overarching than what we have here in the US, where individual states have enacted legislation. It's worth noting that regulatory differences create challenges for global companies. So proactive governance is key: understanding those frameworks ensures compliance, and that in turn builds trust.
Matt Darrow [00:09:22]:
Let's keep carrying the trust topic forward, because it comes up even when you two are evaluating AI tools and vendors. We use a variety of tools ourselves internally here at Vivun, but a lot of people are going through this: security teams are evaluating vendors, sales organizations are trying to sell their tools into other companies, and a lot of people don't know what they have, what they need, or even what they should trust. From your chairs, you've established a really high bar for our own internal security reviews. So in your opinion, what are some of the red flags, and even non-negotiables, that you look out for when evaluating an AI vendor? I think that would be really helpful for our audience to understand, for those whose own products and services are getting more and more souped up with AI and who are dealing with folks like yourselves who are actually going to do the scrutiny and run the approval process. So, Jamie, let's start with you. Talk to me about your bar.
Jamie Brown [00:10:17]:
So as it relates to third- and fourth-party risk, first and foremost, it's critical to understand that the data remains ours, right? Secondly, the vendors must guarantee that our data is not being used to train their models. And third, we place a really high priority on protecting our intellectual property, such that using their service doesn't give them any ownership over the insights or the actual outputs associated with it. Additionally, it's really important to be cautious about using free tools or free versions of AI tools, even open source tools, perhaps like the one that kind of broke the AI world, or the Internet, this week, called DeepSeek. Many times there's a reason why these tools are free: the price of admission is basically your data usage, right? And associated with that, these free platforms could lead to significant data loss, as users might inadvertently share sensitive information, and the consequences are very much unpredictable. The good news is it's not all doom and gloom. With most paid AI services, like OpenAI's subscription models, you often receive legal measures and data protections that help mitigate some of these risks around data loss.
Matt Darrow [00:11:31]:
How about for yourself, Jess? Red flags for you.
Jessica Siclari [00:11:34]:
Yeah, so I think a lack of transparency is another red flag, right? If a vendor can't clearly articulate how they're protecting our data, then it's a no-go. Either it means that they're hiding something, or, even worse, they just don't know. Neither is acceptable.
Matt Darrow [00:11:50]:
You guys have a high bar for what comes in the door, even at Vivun. But let's turn this on its head. On the flip side, what are some of the things that you're proud of, from your own chairs and your own teams in security here at Vivun, that we've put in place and that you feel strongly about in our approach? Jamie, let's start with you there, and then Jess after.
Jamie Brown [00:12:08]:
Yeah, for me, security has always been a top priority since the beginning of the company. We've built an enterprise-grade security program, mainly in house, and we've designed it to protect our company data as well as our customer data. The collaboration we have with engineering has been key to ensuring that all of our products and services, including the AI features that we develop, have the appropriate security controls in place, and we have the compliance measures in place as well. My goal has always been: how do we empower employees to leverage these new technologies and tools while enforcing the highest level of security to safeguard sensitive information?
Jessica Siclari [00:12:45]:
Again, I'd add that we're proactive, right? We were discussing regulations like the EU AI Act before they were finalized. We really keep a good pulse on the legal, regulatory, and threat climate. So compliance is baked into our processes, and I think that really keeps us ahead of the curve.
Matt Darrow [00:13:05]:
And my favorite closing question: the one thing. We covered a lot of ground today: the headlines you don't want to be a part of; why security evaluation is a little different for AI, given how much it has to do with how these models, the data, and the tools work; your bar; and how we do things at Vivun with respect to all this. What's the one thing to remember from today's conversation? What do you want the audience to walk away with?
Jamie Brown [00:13:26]:
Yeah, I mean, for me it's just to reiterate, right? AI may be new to some, but the core principles for security and data protection remain unchanged. It's essential to know who has access to your data and how they're using it. For the security team, the key is adaptation, not avoidance. So finding ways to lean into and lead the way in integrating these new technologies such that your employees and your products can use them while keeping the data secure.
Jessica Siclari [00:13:53]:
Yeah, I definitely agree with that. And I would say two other things stand out: transparency and trust are just non-negotiable in this world. Companies that prioritize these, I think, will thrive in the AI era. If a vendor is not disclosing how they use and protect your information, it's a red flag, and I think we should feel empowered to ask the right questions about it.
Matt Darrow [00:14:16]:
Well, thanks guys, Jess, Jamie. If you want to learn more, follow us on LinkedIn. You can also subscribe to Vivun's podcast, The Unexpected Lever, where we continue conversations on a variety of topics, all focused on how B2B sales is changing for good. We look forward to seeing you next time. Thanks, everybody out there. Enjoy the rest of January, good luck closing the quarter and the year if you're on that cycle, and we'll see you soon.
Jarod Greene [00:14:41]:
For additional resources, check out Vivun.com, and be sure to check out V5, our five-minute soapbox series on YouTube. If there's a V5 topic you'd like us to talk about longer, let us know by messaging me, Jarod Greene, on LinkedIn.