The Unexpected Lever

How is AI transforming the role of sales engineers, and what does this mean for the future of B2B revenue streams?

In this episode, Matt Darrow, CEO and Co-founder of Vivun, and Joseph Miller, Co-founder and Chief Data Scientist at Vivun, share how AI will affect sales engineering teams. They explore the integration of top-down and bottom-up approaches and how AI technologies are fundamentally altering the nature of labor in B2B sales. You'll also learn the critical skills sales engineers need to stay competitive in an increasingly AI-driven market.

In this episode, you’ll learn:
  1. The Evolution of AI: Joe takes us back to the fundamentals, explaining the top-down and bottom-up strategies that have helped AI evolve over the decades.
  2. The Impact on Labor and Productivity: GenAI disruption is different from previous technological advancements and can't be mitigated by retraining alone.
  3. The Future of Sales Engineering: Sales Engineers will have to adapt to new AI tools, leveraging them for automation and strategic advantage.

Things to listen for:

(00:00) The evolution of AI

(01:14) Merging top-down and bottom-up approaches  

(04:22) We underestimate the advancements in AI technology  

(06:51) AI's impact on the future of work and labor disruption  

(13:55) How partial automation is shaping the future workforce   

(18:42) Usability challenges in the early stages of AI integration  

(20:08) Revolutionizing sales engineering with AI in solution design  

(22:27) Essential skills for sales engineers to adapt to AI

(25:48) Automating tactical work with AI to free up time for innovation


What is The Unexpected Lever?

The secret sauce to your sales success? It's what happens before the sale. It's the pre-sales. And it's more than demo automation. It's the work that goes on to connect technology and people in a really thoughtful way. If you want strong revenue, high retention, and shorter sales cycles, it's the pre-work centered around the human that makes the dream work. But you already know that.

The Unexpected Lever is your partner in growing revenue by doing what you already do best—combining your technical skills with your strategic insights. Brought to you by Vivun, this show highlights the people and peers behind the brands who understand what it takes to grow revenue. You're not just preparing for the sale—you're unlocking potential.

Join us as we share stories of sales engineers who make a difference, their challenges, their successes, and the human connections that drive us all, one solution at a time.

Transcription
VIVUN | THE UNEXPECTED LEVER | THE IMPACT OF AI IN SALES ENGINEERING

Episode Transcript
This has been generated by AI and optimized by a human.

Matt Darrow [00:00:00]:
I'm Matt Darrow, co-founder and CEO of Vivun. I started Vivun after a career running global sales engineering teams at private and publicly traded companies. And I'm here with Joe Miller. We know he's spicy, not mild, but he's also a Berkeley physicist, Cornell PhD, Yale MBA, entrepreneur, and most importantly, dad. Joe, you worked at Bridgewater Associates with Ray Dalio. He turned his management team's decision-making process into an AI expert system. You're an entrepreneur who's built everything from golf clubs to hedge funds. You've been researching AI for the last decade.
Matt Darrow [00:00:36]:
You're also Vivun's Co-founder and Chief Data Scientist. And today we're going to talk about the impact of AI on sales engineering teams. We know that AI has created a new transformational wave, and we also know that sales engineering work remains critical for every company's B2B revenue stream. Yet there has been little discussion envisioning the impact of AI on this profession and the sales teams in which we operate. So Joe, I want you to kick us off. GenAI, we know it. Big new wave. What makes this wave so different, given how long you've been tracking and studying and applying these technologies?

Joseph Miller [00:01:14]:
I think there are three major things that are sort of happening all at once right now. The first one is a little bit of history. I'll try to be brief about it, but it's helpful to remember that over the last 50, 60, 70 years in AI, all the way back to the fifties, there have been two efforts in the space. And this is a reductionist way of thinking about it, but it's helpful: there's a top-down way of thinking about AI, and this is the expert systems of the seventies. This is semantic reasoning, the if-then rules that people are familiar with. And then there was a bottoms-up approach of, well, instead of positing the rules, how about if we consume all of the data and try to derive what the structure of this thing is? Initially it was much more rule-driven, and then people started doing this bottoms-up approach, but it had challenges. We didn't have enough data.

Joseph Miller [00:02:02]:
You need a lot of data to be able to pull out these sorts of patterns. You need a lot of compute to do that. And through the eighties and then later the nineties, there were basically two of these things we called the AI winters, a hype cycle not dissimilar to the one that people are wondering whether we're in now. And they both ended up collapsing back down because of these constraints. There wasn't enough compute; then we got enough compute, but there wasn't enough data. And then around the early-to-mid 2000s and 2010, I'd say it really started to catch fire, because you had a lot of data thanks to the Internet, and you had a lot of compute. Neural networks were really starting to find their way and get really powerful. And the top-down way of doing research got what I think is one of my favorite figures of speech: "good old-fashioned AI."

Joseph Miller [00:02:48]:
That's how people refer to it now. It's like old-timey. But what was going on in that space over the last ten years or so was really interesting. There's a lot of really interesting work being done in this area that people tend to refer to as knowledge representation: these graphs and graph theories and all of this stuff about how you relate objects to each other through nodes and edges. You have these really powerful ways of representing domain knowledge. It was overwhelmed by the success of neural networks and things like that. And then LLMs came along. And I think what really happens here is that they allow you to bring both the top-down and the bottoms-up approach together for the first time and be able to engage with that knowledge representation in, say, a graph.

Joseph Miller [00:03:32]:
There are many other forms you can do it in, but you can engage with it using natural language, and that really opens up an enormous amount of possibilities. Whereas prior, you might have had the domain knowledge, but it was just too rigid to engage with to be practical for businesses. So that is the number one thing that I see happening right now: this combination of bottoms-up approaches getting really, really good with LLMs, transformer models and such, but also this top-down value of, oh, we have a lot of understanding of how to represent domain knowledge, and now you can bring these two things together and do some unprecedented stuff. So that's the number one thing that I think is happening in the AI space. Number two is that we are once again on this S-curve, the exponential rise of this S-curve. And already, I love this part.

Joseph Miller [00:04:22]:
I was reading the news this morning and I saw this headline that was like, has AI plateaued? And you're like, humans are insatiable. We just can't be pleased. A year ago, GPT-3.5, or maybe a half-version ago or so, was revolutionary, but actually still not that good. You couldn't roll an agent out with it and think that you were going to shake up the world with it. You know, it got basic things, basic math, wrong. I mean, math is still a challenge, but it was really bad then. All kinds of stuff. It couldn't do references well, all kinds of things. And that was just a year and a half ago.

Joseph Miller [00:04:58]:
And now a lot of those challenges aren't even things that people talk about anymore, because the new models, Claude and GPT-4o, but also the open source models that are coming out, are really good. They're really powerful. And so we're seeing things that we expect to happen in a year or two happening in six-month cycles. And humans just aren't very good at reasoning exponentially; we're linear thinkers. And so we see the thing and say, oh, this is maybe a five-to-ten-year thing, and actually it's on your doorstep this year, right? So that's the second thing I see in the market a lot: people just underestimating what exponential change looks like and what it feels like. Which leads me to my third thing, which is, I read about this a lot in mostly finance and economic journals, where people are saying, oh, AI is disrupting software engineering, or it's disrupting lawyers, or it's disrupting the truck driver with autonomous vehicles, or whatever. People tend to think a little bit more sector-based, like, oh, it's disrupting this particular role. And this is a very different thing.

Joseph Miller [00:06:03]:
And now lots of people might disagree with me on this, but when I look back at technological disruption over the last few hundred years, you can go back to spinning jennies and things like this. There's a good model in economics called the Cobb-Douglas model, and it's a way of thinking about how, when there's a technological disruption, capital becomes more productive for firms. And so people tend to rotate out of more expensive labor into the capital. Then you hit diminishing marginal returns, and that rotates back into more labor to run the machines, et cetera, et cetera. And there's sort of this spring effect that happens: you crush one job, but then those people go into another role that was underutilized with labor, et cetera. And that's sort of a basic model of how these things flow. But this case is different, because you're not disrupting a single sector. I think we're disrupting the concept of labor generally.
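
For listeners who want the model behind that point, here is a minimal sketch of a standard two-factor Cobb-Douglas production function, with purely hypothetical numbers:

    # Cobb-Douglas production: Y = A * K**alpha * L**(1 - alpha)
    # A = total factor productivity, K = capital, L = labor,
    # alpha = capital's share of output. All numbers here are hypothetical.
    def output(A: float, K: float, L: float, alpha: float = 0.33) -> float:
        return A * K**alpha * L**(1 - alpha)

    def marginal_product_of_labor(A: float, K: float, L: float, alpha: float = 0.33) -> float:
        # dY/dL: what one more unit of labor adds, holding capital fixed.
        return (1 - alpha) * A * (K / L)**alpha

    # A technology shock doubles A; the firm can now hit roughly the same
    # output with a third of the labor, which is the rotation Joe describes.
    print(output(A=1.0, K=100.0, L=100.0))  # baseline output, ~100
    print(output(A=2.0, K=100.0, L=35.0))   # similar output, far less labor

The diminishing exponents are what historically pulled displaced labor back in; Joe's argument is that an AI agent whose marginal cost stays near zero breaks that spring.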

Joseph Miller [00:06:51]:
So it's a lot more like stretching a spring and then deforming it. You're like, this isn't coming back. The substitution out of some role of labor to an AI agent that is doing that knowledge work isn't going to come back. You're never going to match the marginal cost of an AI doing that same knowledge work with a human. So this is very, very disruptive. And I hear a lot of people's solutions being, oh, it's going to be retraining and things like this. And to an extent that will certainly be true. In a lot of software engineering, you already see a lot of retraining happening.

Joseph Miller [00:07:22]:
Certainly we are at Vivun, too. We're retraining all of our engineers, regardless of where they are in the stack. That's probably happening in a lot of places. But you're not going to retrain the truck driver to be programming convolutional neural networks. That doesn't make any sense. So I think that this labor disruption is going to be very significant, and my opinion is that people should just start reasoning about that.

Matt Darrow [00:07:44]:
Folks might hear that and be more on the fearful side: hey, what does it mean when labor is being disrupted? Then there's maybe another school of thought, where folks are sitting there saying, well, with AI allowing me to accomplish work tasks and work products that I now don't need to do myself, can I just be more productive? So instead of labor being changed, everybody's just going to be more productive. So in your mind, are we just going to have a new scale of productivity, or will jobs actually go away? How do you think about those relationships?

Joseph Miller [00:08:16]:
I think it's both. People are certainly going to be more productive. I mean, we anchor to software engineers, we're a software company, whatever. But there's a lot of benefit for any software engineer in just being able to ask, where in this code is the bug? That's most of the time you spend programming, and LLMs are really efficient at helping you find those things. So you're getting a lot of productivity out. And that kind of idea is general, right? If you're a writer, there's a saying that the hardest part of a book is the first sentence, right? Getting going on anything has a lot of activation cost, no matter what your job is. And certainly these things are very good at helping with that. So I expect that you will get productivity gains across the board for everybody.

Joseph Miller [00:09:00]:
I think the challenge, though, is that every unit of productivity is not equal. There are diminishing marginal returns at any firm to the amount of production it can actually have and convert into value for the company. And then on top of it, not all employees are equal, either. You have people that are really, really good; in software engineering we call them the 10x engineers and the ninjas and whatever other silly names we give people. There's usually a power law of productivity across labor. And if you 10x everybody, then your best person is now going to be way, way, way up here, and even your worst person is now much better. But that doesn't mean there's 10x more productivity that the company can actually absorb, or that the market will absorb from the sale of services and goods from that firm. And so you end up asking, well, if my best person is now 10x better, are they able to do most of the value-generating work? And if so, then the bottom tail of this labor supply is actually a negative expected value.
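
A toy illustration of that argument, with entirely made-up numbers: scale a roughly power-law team by 10x, cap what the firm can absorb, and the bottom tail's net contribution goes negative:

    # Hypothetical: ten workers whose productivity follows a rough power law.
    productivity = [100, 50, 33, 25, 20, 17, 14, 12, 11, 10]  # value units/year
    cost_per_worker = 90                                      # same salary for all

    boosted = [10 * p for p in productivity]  # AI makes everyone ~10x productive
    firm_ceiling = 1500                       # the market only absorbs so much output

    absorbed = 0
    for rank, p in enumerate(boosted, start=1):
        take = min(p, firm_ceiling - absorbed)  # the best people fill the ceiling first
        absorbed += take
        print(f"worker {rank}: contributes {take}, net value {take - cost_per_worker}")

With these numbers, the top two workers fill the entire ceiling, and every worker below them nets out negative, which is the dynamic Joe describes.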

Joseph Miller [00:10:03]:
I think it's disingenuous of AI researchers and such to say, hey, it's just going to get all better for everybody. I think people have to start thinking very deeply about, oh, my work is being disrupted. My labor, my ability to convert my labor to value, is being disrupted. So I need to start thinking about different ways of operating and getting ahead of that, so that I'm prepared for those moves.

Matt Darrow [00:10:23]:
Yeah, well, I can just picture that visual, too. Your company's output and throughput has some level of ceiling that might increase a little bit. But again, you look at the productivity of everybody else that then gets massively scaled; your throughput of what you can drive into the market isn't magically going to get 100x larger at the exact same time. So those dynamics will change. You mentioned engineers a few times, and that's sort of one type of role. What types of roles, what types of labor, do you think are likely to be impacted first?

Joseph Miller [00:10:50]:
I sort of think about this problem like starting a business. You start a company, and what do you have? You have a technical person, your product person, and sometimes you have a salesperson. Usually those are different people, but the early part of a company is just those steel pillars, and as you grow, the organization really evolves to support those two structures. And then eventually you get to a space where you're like, well, my salespeople aren't very technical, and my technical people can't talk to other people. So you're like, I need this hybrid role, and you can afford it, and it makes sense from a value generation standpoint. So you're like, okay, let me find some technical people that can talk to other people, and salespeople that maybe know something a little more technical in their disposition, and let me go create this hybrid space.

Joseph Miller [00:11:35]:
And that's sort of what the SE role is. Now the problem is that that's how the specialization tends to occur, just from that logic. But when there's an unwinding of that labor value, it's going to unwind in the same direction. You're not going to get rid of sales and your product team first. You're probably going to end up attacking these hybrid roles first. These are the most vulnerable positions. And I think we actually saw a lot of that in, well, it wasn't a recession, but nobody was enjoying 2023 enterprise sales. So you look at what companies did when budgets got constrained. What did they do when spending got cut off or investments got cut off? Where did the money end up going? It rarely got pulled from product or from sales.

Joseph Miller [00:12:19]:
It tends to get pulled from these more specialized hybrid roles. So I think that these types of roles are going to be under a lot of fire first. That's why we're building the products that we are: to augment this space and pivot this role toward where the value generation is.

Matt Darrow [00:12:36]:
Well, to your point on the hybrid role, because you mentioned the sales engineering example, I would say a QA engineer sort of follows this pattern as well. Folks that are involved in the translation between two specialties or two groups, these are the ones that, to your point, might potentially be disrupted first, or at maybe a different pace.

Joseph Miller [00:12:55]:
I also think, just one point on that: people often think that you need to have 100% replacement. You don't. You can get, you know, 60, 70, 80% replacement and then say, well, the other 10, 20, 30%, that just goes back to those pillars. It's not an all-or-nothing thing. If I can get most of the work of that hybrid role done, then I can offload what isn't done back to the pillars that would have originally done it. So I think that mechanism is likely to happen a lot sooner than people are thinking.

Matt Darrow [00:13:23]:
Yeah. In terms of how some of this might actually play out, you talked about what you've seen over the last decade in your own work, but also the general history of these two competing approaches to AI, top-down and bottoms-up, now converging. Let's talk a little bit about the LLMs, because I think people are skeptical that software engineering work could be accomplished by AI one day. Five or six years ago, it was so hard to hire and retain engineers that it's almost strange to even think we're having this conversation now. Why should people believe that the ability to go write code, to write software applications, is actually something that could be massively disrupted, too?

Joseph Miller [00:14:04]:
The skepticism is reasonable. I think that you've had a lot of these hype cycles, and I think that humans have a natural disposition to think, well, the nuance of my work and the craftsmanship of my work is irreplaceable in some sense. And there is something to that. As an example, there's this interesting study where AI is actually better at detecting lung nodules in cancer patients than radiologists are. However, when people were asked, would you prefer the diagnosis of the AI or the radiologist, it's overwhelmingly the radiologist. And people originally thought the reason for this was that they didn't trust the AI. And there was a follow-up study that tried to suss that out, and it actually revealed that the reason they don't choose the AI is that they think their condition is special. They think it's unique.

Joseph Miller [00:14:50]:
And so humans have a very strong instinct to do this: we just think that the way we do our work and the way we perceive the work is unique and can't be replaced in this way. And I think that probably leads us to overestimate how irreplaceable our labor is. But then the second thing is that software programming specifically is a language as well, and it's a highly structured language, so it lends itself to the strengths of a language model. And then, of course, there are decades of Stack Overflow that we've basically been training it on, and there are git commits and all of this stuff that tell you a lot about the quality of what's being accepted as good code and not. So there's, de facto, a lot of good data for that sort of problem, which I think will lead to massive disruption in that space. And then you realize that disrupting software engineering is disrupting all companies; software engineering is the core engineering source of lots and lots of fields. So it's very disruptive.

Matt Darrow [00:15:51]:
Yeah. Even when you cited that headline from this morning about the S-curve, about folks feeling like, oh, is there a plateau? I think some of that might come from people feeling like there's a lot of energy, excitement, and conversation about AI, a lot of buzz, but not a lot of tangible results yet about things massively changing. Why do you think some of the early approaches aren't living up to the hype?

Joseph Miller [00:16:13]:
Lots of people will have views about this, maybe that it's just a handful of transformer layers away or something, but my view is that if you think about how LLMs are created, again, they're this bottoms-up structure. They're consuming the training data and representing the knowledge in it in a particular way. But if you think about what knowledge it is representing, it's co-occurrence knowledge. It's like, hey, this language is being used commonly in these pockets, and so it's co-occurrences that you're modeling. Now, that will work pretty well, because we use language to represent real things in the world, and to represent ideas and logic and theory and such. So it's a good proxy for reasoning, but it's not reasoning. I think there's a lot of confusion out there. People will say, oh, it understands causality.

Joseph Miller [00:17:02]:
No, it does not. It 100% does not understand causality. So now if you ask, well, why is it not delivering the practical value now, the disruption, I think that's a big part of it: this top-down reasoning, this domain knowledge, is not actually implicit inside that model. It's not being represented well. Leaning on these knowledge graphs and other ways of representing domain knowledge has never been more important. You don't really want that sort of average linguistic answer to some particular problem. What you want is the expert's answer, and you want that to be reliable and dependable and accurate, et cetera. You're trying to make business decisions that can't afford hallucinations and things like this. And to do that, you really do have to have a more computational approach to the way the logic is represented. And that's got to come top-down.

Joseph Miller [00:17:49]:
And like I said earlier, LLMs fortunately allow us to engage with such a representation, so that we can naturally communicate with this thing but then have the domain knowledge drive the answer, the actual high-bar answer.
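
A minimal sketch of what that division of labor could look like; the graph, the relation names, and the routing step here are all hypothetical. The curated graph holds the expert knowledge, and the LLM is only the natural-language front door:

    # Hypothetical knowledge graph: nodes are domain concepts,
    # edges are typed relations curated by experts (the top-down part).
    graph = {
        ("discovery", "precedes"): "solution_design",
        ("solution_design", "requires"): "customer_goal",
        ("customer_goal", "blocked_by"): "technical_objection",
    }

    def facts_about(node: str) -> list[str]:
        # Walk the graph for edges touching a node. The answer comes from
        # curated structure, not from statistical co-occurrence.
        return [f"{s} --{rel}--> {o}"
                for (s, rel), o in graph.items() if node in (s, o)]

    # In a real system, an LLM would map a natural-language question onto
    # a node name; here we hardcode that mapping step.
    print(facts_about("solution_design"))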

Matt Darrow [00:18:05]:
I like that point about LLMs being very powerful in terms of how they operate with language, but the inability to reason is what's going to hold these things back from actually accomplishing core, really high-value work. There's also a form factor piece. I know you and I have talked about this before too, where a lot of approaches to using AI, to embedding AI, are taking the same form factors as traditional SaaS: just me, the human, interfacing with my machine. And I think that's probably part of why people don't feel like AI is living up to the expectation. Talk about that. Where do you see that going? Is it how you interface?

Joseph Miller [00:18:42]:
It's a very early technology. We're not really sure how we want to engage with it, where it's good, where it's bad. Not all wrong answers are equally painful; there are all kinds of things like this that are really a UX problem. But right now, I see a lot of people basically thinking about the GPT chat interface. If you try to ingrain that into a product where I'm trying to deliver a work product, or I'm trying to understand a solution to a particular deal or a technical challenge or whatever, there's a lot of contextual knowledge missing from that. And the agent ends up becoming sort of a souped-up Clippy, or a search bar, where all I'm doing is these RAG things, these retrievals. I'm just saying, go get me the information and bring it back, but I've got to piece it together into a holistic framework that makes this thing practical and valuable. Again, I go back to this space: once you encode the domain knowledge, there's no reason you can't say, hey, I have an agent that has the domain knowledge about how to organize things and how to go about the work.

Joseph Miller [00:19:39]:
It has the procedural knowledge, but now it needs the declarative knowledge. Now it'll go use the system to grab the things it needs. Now I have all the ingredients, and the procedural knowledge, the domain knowledge, is going to let me cook them into a thing that anybody actually wants to eat. I think we're just getting to that space, and it's a mind shift: stop building short of the complete work. Try to do the whole thing, and you'll see that these are the pieces we're missing, but they're available. You can create, I mean, to your.

Matt Darrow [00:20:08]:
Point of this massive unlock around these top-down and bottoms-up worlds coming together: the notion that LLMs are only going to take you so far, that they have their general limitations in what they can do, the inability to reason, the inability to interface the right way. Now, for me, going back to what you kicked off at the very beginning around some of these hybrid roles and the impact, this is where the sales engineering work, the work that I had done for 15 years before starting Vivun, really comes into focus. Because these are things that AI should be really, really great at doing. Not just answering technical questions, but solution design, demo building, tailoring presentations. And to me, when that happens, and I look at how CROs are going to staff SE teams or how they're going to respond, I can see that the normal AE:SE ratio mode of operation is just completely dead. That's actually a little bit energizing for me, because I remember years ago, starting Vivun, one of the first things I ever published was a benchmark, a guide, a way to think about KPIs for running these teams and groups, basically saying your ratio is wrong. And for the companies that we work with at Vivun, 99% of the time, they are wrong.

Matt Darrow [00:21:19]:
And they were wrong, though, because they didn't have visibility into what it actually took to get technical wins, because that's 70% of the sales process, and it normally lives on the dark side of the moon. But now we're in a place where we can imagine this work being done. And it's not just that your ratio is going to be irrelevant because it was invisible and mismanaged. Now it's going to be irrelevant because that mode of operation is fundamentally broken. So, to your point about the type of work and the labor that can be disrupted: to me, SEs are going to have to shift their skillset pretty darn dramatically, be deployed in brand-new ways, and develop new skills. Or, if they just try to retain old skills, what will happen is that those teams are going to become smaller, and a lot of the work either might be 60% automated or might be pushed to the other teams they're normally interfacing with. And on the flip side, to me, account executives are in this unique position to become much more independent and self-sufficient eventually, and that'll allow CROs to give them higher quotas.

Matt Darrow [00:22:27]:
They're going to be able to attain more against those higher quotas. I think what a lot of the go-to-market folks and sales leaders out there are searching and craving for is more efficient costs, being able to retire quotas with a fewer number of people. And that will happen; AE team sizes are going to shrink and get smaller. What's your take on prospects and customers? Are they going to just directly engage with AI, or do we still want the salesperson in the mix?

Joseph Miller [00:22:52]:
I think there's a lot of context that needs to be set to answer whether that's possible. I joked last year, well, we're not going to see agents, one AI selling product to another AI. Now I've imagined certain situations where actually this is not a wild idea. It's actually kind of an efficient idea. If you had an AI marketplace where one agent knows its constraints and its budget, and the other agent knows what the product can deliver and what its constraints are, there's a Pareto-optimal solution you could imagine those two things reaching. I don't think most cases are that way, though, right? A lot of deals just involve a lot of speculative knowledge, because we're working with incomplete knowledge about the product, or about a strategy, or about certain problems that might come up. And so I think this domain knowledge of the SE is so critical to getting these technical deals done that you're not likely to be able to offload it to an AI, to an LLM, out of the box. By codifying that sort of knowledge, though, you can direct the AI to go top-down, to reason with that sort of domain knowledge so that it can actually deliver an entire solution.

Joseph Miller [00:23:58]:
The whole point of the domain knowledge, and this is just to try to hammer this home: a bottoms-up model can only tell you what has been. If you take an LLM, a GPT model or whatever, and you just throw all of your data in it and it becomes some RAG search or some weird thing like this, all it can do is tell you what has been the case. A top-down domain model is going to tell you what ought to be the case. So when a deal comes in, you say: we ought to know their goal, we should understand the problems they are facing in reaching that goal, we should understand the strategy and how they move from the current state. All of that is how you ought to process this knowledge so that you can actually deliver a good solution and understand it comprehensively. That can be codified, and the AI can go reason over that sort of structure, over that procedural knowledge. But if you just start from the bottoms up, then you're hoping that all the data just reveals what ought to happen, and that's not always the case.
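
One way to picture codifying "what ought to be known" about a deal; the field names below are hypothetical, but the idea is that the structure itself is the top-down knowledge, and retrieval only fills its gaps:

    from dataclasses import dataclass, fields
    from typing import Optional

    # Hypothetical top-down deal model: these are the things a technical win
    # *ought* to establish, regardless of what happens to be in the data.
    @dataclass
    class DealModel:
        customer_goal: Optional[str] = None
        blocking_problems: Optional[str] = None
        current_state: Optional[str] = None
        proposed_future_state: Optional[str] = None

    def missing_knowledge(deal: DealModel) -> list[str]:
        # The bottoms-up side (retrieval over your data) is only invoked to
        # fill these gaps; the checklist itself is the domain knowledge.
        return [f.name for f in fields(deal) if getattr(deal, f.name) is None]

    deal = DealModel(customer_goal="consolidate three billing systems")
    print(missing_knowledge(deal))  # what discovery still ought to uncover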

Matt Darrow [00:25:00]:
Even after being an SE for a long time, an SE leader, a global SE leader, I look at what becomes possible when you can codify SE knowledge, the work that they do, and the things that AI can bring to bear, and you can see it changing along a variety of different dimensions. The first one, and a super obvious one for every sales engineer out there listening to this, is to just learn to leverage new tools. Because even in this space, I mean, it has been a desert of technology. Before Vivun, SE tools really only ever consisted of a couple fields in CRM, if you bought your sales ops guy a nice bottle of whiskey that they really wanted, or a spreadsheet that you cooked up that was woefully inadequate. You need fluency in new technologies to give you new leverage.

Matt Darrow [00:25:48]:
AI is just another one of those new leverage points, a way to automate your own work and workflows in a brand-new way. When you do that, you can logically extend to the sales engineer needing to develop new skills that will make them wildly valuable beyond technical expertise and showing a demo. Because AI can do a great job at understanding how things work and explaining it, at having, to your point, codified knowledge around what ought to be done and how to do it. It will also be in a phenomenal position to do things like create assets on the fly. This notion of pre-recorded demo content and managing orgs and environments, all this pre-built stuff, is just going to be done completely on demand, as needed, completely on its own. So if SEs are relied upon just for the technical piece, as the demo doers, that's going to be a huge problem for this profession. So to me, skills around storytelling, asserting political influence in campaigns and deals, champion building, objection handling, these all come to mind as focal points. Because that's going to allow SEs to use AI to take a lot of the necessary yet tactical work off their plate and shift focus. A lot of that work around research, discovery, preparation, demonstration building, deck building, solution design, all of that can be removed, so you can start to test ways to break into new markets and industries, leveraging techniques you've never used before.

Matt Darrow [00:27:23]:
How could we experiment with new use cases of our product to uncover new ways to add value to our customers? How can we take on more strategic dialogue within the company around product direction, or even changing and revamping the sales process? That's the work SEs really want to be doing in the first place. And if AI can help them get there faster, that's phenomenal. But it needs to be something that, one, we want to run toward and embrace, and two, we recognize the power of this exponential curve of innovation and what becomes possible. If we don't embrace it and lean into new skills, the inevitable that you just described is going to happen. Right? This is a large disruption of the work that's going to be done. And if teams don't adapt and change skills and focus, those teams are just naturally going to change in size, and a lot of their responsibility sets are going to be distributed to others, who are going to get a whole hell of a lot more independence and power along the way.

Joseph Miller [00:28:18]:
It seems like we're saying something controversial, but it's only controversial in the time horizon. It's kind of like the guillotine is falling and we're just debating how long it's going to take. And that's almost not interesting; in ten years it's obviously going to happen. So you've got to start moving now, right?

Matt Darrow [00:28:36]:
Give us two things, two things folks need to remember from this session. Above all else, what are the big two takeaways from your chair?

Joseph Miller [00:28:44]:
I think the disruption to labor is the biggest macro shift that's going to be happening over the next five to ten years, especially with robotics. We didn't even talk about that, but we could go on for hours about it, too. And this idea that you need domain knowledge. The domain knowledge is where the value is to unlock these LLMs to be able to do full agentic work, not just be a souped-up Clippy.

Matt Darrow [00:29:07]:
Yeah.