Making Better Decisions: Leaders in Data

In this episode, Ryan speaks with Judah Phillips, CEO of Squark. They discuss the importance of AI to the future of data analytics. The conversation also covers how to ensure that the data you are hunting for is the right data for the question you are asking.
 
Takeaways:
  • They discuss the evolution and application of large language models (LLMs) like ChatGPT and their impact on AI and business strategies.
  • The conversation covers how LLMs use sophisticated mathematics and proprietary approaches to generate contextually relevant language in real time.
  • The discussion also distinguishes between generative AI, exemplified by LLMs, and predictive AI, which focuses on statistical predictions and business analytics. It underscores the importance of aligning AI tools with specific business use cases to achieve tangible outcomes and return on investment (ROI). 
  • The conversation highlights challenges such as data privacy concerns and the need for strategic integration of AI into business processes.
  • Overall, the episode advocates for a nuanced approach to adopting AI technologies, emphasizing the necessity of clear business objectives and an understanding of the diverse capabilities of AI tools for effective implementation.

Quote of the Show:
  • “You have to ensure that the question you're trying to answer can be answered by the data that you have.” - Judah Phillips

Links:
  • LinkedIn: https://www.linkedin.com/in/judahphillips/
  • Website: https://squarkai.com/

Creators & Guests:
  • Host: Ryan Sullivan
  • Guest: Judah Phillips

What is Making Better Decisions: Leaders in Data?

Dive into the dynamic world where data meets decision-making! Hosted by Ryan Sullivan, a seasoned analyst, this podcast is your go-to resource for understanding how organizations harness the potential of data to drive strategic decisions. Join us every Wednesday as we navigate the ever-evolving landscape of data-driven transformations, making every episode a journey toward smarter decisions and better outcomes. Making Better Decisions is proudly sponsored by Canopy Analytic, helping companies make better decisions using data.

MBD - Judah Phillips
Intro: [00:00:00] This is Making Better Decisions. I'm your host, Ryan Sullivan. Decisions are where the rubber meets the road for organizations. Each week, we'll be learning from people who are on the front lines of turning raw data into better outcomes for their organizations. This show is sponsored by Canopy Analytic, helping companies make better decisions using data.
Ryan: Welcome everybody to another episode of the Making Better Decisions podcast. Today's guest is a data-focused leader and an experienced entrepreneur with a strong track record in the AI and SaaS sectors. He is an author and business school instructor, and has built a platform that brings machine learning and generative AI to the masses in a marketing automation solution.
Please welcome co-founder and CEO of Squark, Judah Phillips. Welcome to the show.
Judah: Thanks. Awesome to be here, Ryan. I'm excited to be on your show. Thanks for having me.
Ryan: Thanks so much for [00:01:00] coming. All right. So I want to dive right in. I want to start with the same question that we ask everybody. What is one thing that you wish more people knew about using data to make better decisions?
Judah: Yeah, yeah, like everybody. You know, when I think about it, there are a number of things. The first thing is that you have to ensure that the question you're trying to answer can be answered by the data that you have. Right. That's like a sort of 101 thing, but I think it's really important to understand. You know, in my experience, and this even happened this morning.
Actually, um, I got a, uh, email from a CEO and, you know, he had some questions about forecasting and, you know, thought it would be really cool if we could, you know, do something with AI. And indeed, we can certainly [00:02:00] do something, but the data that he wants to use to predict doesn't exist, or it doesn't exist in the right format, or it exists in a system where, you know, the data collection, the ETL, hasn't been set up, right?
So there's going to be some work to enable that. Um, and obviously analytics people, AI people, they're very busy and they work on scheduled roadmaps and prioritized projects, so. You know, interpolating that in, um, we have to be really judicious about it, right? So that's my first piece of guidance, which is, you know, know if the question you're asking can truly be answered by data.
Um, you know, another thing related to that is to, you know, understand the type of answer that would be required for the question. So this actually happened to me in a different scenario this morning, uh, just before we [00:03:00] got on the call, where, uh, a team wanted to understand the impact of, um, current events on sales, essentially, right?
And, you know, that makes sense, right? If you ask that at a high level, it, you know, seems a reasonable request, but what really are they looking to answer, right? And it turns out, when I started to dig deeper, they're looking to answer, you know, what will be the duration in days of the sales impact after the event. Um, and then, you know, related to, you know, knowing that you can answer it with the data, and understanding, uh, you know, what is going to be the acceptable answer, what your stakeholder is looking for, there's also understanding, and I guess this relates to the first part of my answer, whether the data you have already collected, that you believe to be sufficient, truly is sufficient.
Oftentimes there are gaps. In the second use case this [00:04:00] morning that came up, it's clear that in order to answer it, there's going to be some feature engineering required, and some new data collected, and some new ways to think about, uh, you know, the existing data that would help make that analysis relevant.
Um, I also think that you have to look at, like, resource availability and, you know, infrastructure availability, um, in order to, you know, actually deliver. So I would probably concentrate on those, you know, things, right? Knowing that it can be answerable with the data. Knowing that you have the data.
Understanding exactly what you want to answer. Making sure you fill in any gaps based on that and then ensuring you have like the correct allocation of resources and like potentially, you know, systems or server capacity infrastructure to do that. Oh, and then one more thing that comes to mind, revenue impact.
Like this is a super important thing for me because, um, [00:05:00] well, I've been, you know, an analyst, and, you know, had a lot of people asking me, or me asking them, if they want fries with that, uh, and I've, I've sort of, you know, over the years, certainly gained in seniority. And you know, one of the things that companies live and die by is revenue, right?
And everything costs money to do. And so when you're looking at a company that, let's say, doesn't have maybe the free cash flow of a Google, right? Or, you know, has a CFO or has, you know, economics within P&Ls and business units, you want to make sure that the projects that you take on and the work you do are actually actionable and have a direct impact on the business.
And you know, there's really two ways to impact a business, right? It's either increasing revenue or reducing costs. So given those constraints I put on, if you can then also wrap that request [00:06:00] around, you know, an ROI or a financial impact pro forma, you know, that you think will result from getting the answer and being able to take action,
well, you know, you'll get further, right? If there's a big impact, uh, from a revenue or cost savings perspective, than if not, right? And I feel like that is often one thing that is missing, um, in, in many requests that I've gotten over the years. Um, even, even today, like the first one, like, I understand, you know, the time savings for the executive who's been asked by his boss to do it, right, but I don't really know what action they're going to take.
I, there's a few I could name. Um, whereas in that second request this morning, I know from the gentleman who requested it that the, um, the direct revenue impact is there, or at least it's, you know, an influencer of a direct revenue impact, [00:07:00] so focus on those things and, you know, always in the business, tie it back to the dollars in the simplest way, right?
Increased revenue, reduced costs could argue boosted efficiency, but that's really reduced costs, right? So that will get the attention of the people in the, you know, in the leadership roles or in the C suite, right? It's not just analytics for analytics sake, or, you know, Oh, great. We have more data, um, to help us or confuse us.
Uh, it's that it's tied to, you know...
Ryan: Wow, I love that. One of the things that I'm interested to learn from you is
kind of the intersection of data and AI. You know, I think there's, there's kind of a, a couple of different populations out there. And so you have one population that's just kind of like, you guys do computers, [00:08:00] right? Then you have another population where it's kind of like, all right, well, you know, like data and AI are kind of one thing, kind of, you know, different.
There are just different tools in the same bag. And then you kind of have people that actually, like, really do that work and know that pulling in a rectangle of data that's already been cleaned and making some visualizations on it is very different from saying we have siloed, unclean data in many different locations, and we're going to pick a very specific machine learning algorithm or a neural net or, like, something very specific and generate a very specific result with it.
So, I think one of the things that would be very interesting to listeners who don't have a deep background in artificial intelligence is maybe if you can share a little bit about what are [00:09:00] some of the most tangible limitations of AI. I think the possibilities of AI right now have really captured the public's imagination.
We've seen, you know, artificial intelligence tools come into, you know, public conversation. Everybody is talking about ChatGPT, and they don't have to understand anything about backpropagation or neural nets or transformers or any of these things that actually make it work. So I think that there's, you know, sometimes this idea of going to shoot the moon, or this'll be the first initiative that we do, or, going back to a lot of what you talked about, like, maybe we don't have the data for this.
Tell me a little bit in very simple terms, what are these artificial intelligence tools and what are some of the things that maybe shouldn't be our first step with them?
Judah: that's a really big question, so there's a lot to unpack there. So let me sort of [00:10:00] reflect on the question and kind of give some context, and I'll, I'll start at a broad level, in a broad space, and narrow it down. Um, first of all, what is AI, right? Like
Ryan: Hmm. Mm
Judah: I have been working with, like, statistics and machine learning, quote unquote AI, for, you know, could argue, almost two decades now, um, starting when I was, um, at Monster Worldwide, and even before that, I mean, in statistics for, for three decades. But really, at Monster, a
big job site, you know, 16 years ago, we did churn models and, you know, predicted who was likely to, you know, no longer be a customer, or we would do personalization and predict, like, based on behaviors or demographics or firmographics, you know, what experience to render. And those were all machine learning based, you know, predictions.
Um, so we go back to the question, like, what is AI? It means a lot of different things to [00:11:00] different people. Um, and, you know, even vending, like, predictive AI, um, with Squark in the early days, there was still a lot of confusion. You know, people, uh, you know, didn't have the data, they didn't have the time, they didn't understand, you know, the outcome.
So when you start to, like, group AI, um, and there's just a, there's a lot here, and I'm sure, um, there are some folks who'll say I may, in answering this quickly, omit a few things, but largely there's, um, supervised machine learning, you know, semi-supervised machine learning, and unsupervised machine learning.
So, you know, starting with the latter, like, unsupervised machine learning would be something like clustering. You know, where you're passing a parent-child relationship, like the IDs on receipts and the SKUs on those receipts, and you want to cluster the data together to see, like, set co-occurrence, to see, like, oh, when people buy [00:12:00] bread, do they buy milk and cheese more likely?
Do they buy bananas, right? And retailers have used that type of, um, unsupervised machine learning, these task-focused clustering algorithms, to predict how to merchandise shelves for a long time. Like, if you have three shelves, like an end cap and two sides, what should you put where, uh, based on the affinities between the products, so that people will, like, pick them up, you know? Like putting, you know, sunscreen next to beach towels or, you know, beach toys.
Right. It's like, those things that are merchandised are not just necessarily serendipitous. So you have these, um, unsupervised, you know, algorithms. And this could include, you know, affinity analysis, market basket analysis, clustering, things like that, a whole bunch.
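
To make the market-basket idea concrete, here is a minimal sketch in Python that counts how often pairs of SKUs co-occur on the same receipt and computes a simple lift score, the kind of affinity measure behind putting sunscreen next to beach towels. The receipts are invented for illustration; a real system would read transaction data and might use a dedicated library instead.

```python
from collections import Counter
from itertools import combinations

# Toy receipts: each set is the items purchased together (illustrative data).
receipts = [
    {"bread", "milk", "cheese"},
    {"bread", "milk"},
    {"sunscreen", "beach towel"},
    {"bread", "bananas"},
    {"sunscreen", "beach towel", "beach toys"},
]

n = len(receipts)
item_counts = Counter(item for r in receipts for item in r)
pair_counts = Counter(pair for r in receipts for pair in combinations(sorted(r), 2))

# Lift > 1 means the pair co-occurs more often than independence would predict.
for (a, b), c in pair_counts.most_common():
    support = c / n
    lift = support / ((item_counts[a] / n) * (item_counts[b] / n))
    print(f"{a} + {b}: support={support:.2f}, lift={lift:.2f}")
```
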
Ryan: yeah, absolutely.
Judah: On the other side, you have, like, the supervised stuff, and the supervised stuff is where you're, uh, learning from historic data and you're going to, you know, uh, do something, arguably, to that data, uh, to make it, uh, fit the needs of different [00:13:00] algorithms, which will then, uh, you know, do what they do and produce a result, right?
And so, um, and then you have semi-supervised, where it's, you know, humans are, are somewhat in control or in the loop of that stuff. But, um, that gets a little nuanced. So if we focus on, like, really the two main categories, the supervised machine learning, um, is what you're seeing today that makes companies a lot of money.
And that tends to be around, right, predicting things, right? And when I say predicting things, um, it could be predicting, as I said, who's going to churn. So we learn from all your customer data in a CRM, or an EDW, or some data system, um, and each feature, or, you know, field in the data, attribute, you know, the terminology is overloaded here, but it's called a feature in AI.
It, you know, could be categorical, could be numeric, you know, could be a text string, right, a category, a time stamp, and you're looking maybe on a row by row [00:14:00] basis at your customers, you know, you have this, to your point about a rectangle of data, you have columns and rows, each row is a record. Each record has an ID, right, and then you know things about the customer, what they bought, when they bought it, who they bought it from, you know, behaviors and stuff, demographics, firmographics, and you know that this person, you know, churned, or didn't churn, or bought again, or didn't buy again, or bought these products, or didn't buy these products.
Well, you can learn from those relationships, and then you infer, do inference, right, using a model, to then score the probability of a customer churning, or a customer buying again, or a customer buying a particular product. Um, or, you know, what experience to serve, uh, or how much the lifetime value might be.
So, uh, or, or, you know, how much they might spend on a particular product, you know, kind of a request-for-quote or a pricing type of model. Um, all that's in the domain of supervised machine learning, or supervised AI. And, uh, that stuff's used all the time to great [00:15:00] effect by companies. As a matter of fact, like, 96 percent or something of the spend is really on, like, predictive AI.
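
As a concrete illustration of the supervised setup Judah describes, a rectangle of customer rows, a label column like churned/did not churn, then inference to score probabilities, here is a minimal scikit-learn sketch. The file name and column names are hypothetical.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical customer table: one row per customer, 'churned' is the label.
df = pd.read_csv("customers.csv")
X = pd.get_dummies(df[["tenure_months", "monthly_spend", "segment", "region"]])
y = df["churned"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Inference: score each held-out customer with a churn probability.
churn_prob = model.predict_proba(X_test)[:, 1]
print("AUC:", round(roc_auc_score(y_test, churn_prob), 3))
```
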
And what's unique to me is that, you know, we started building that on Squark, you know, pre-pandemic, right. And, um, I guess I can announce it on here, I want you to know, too: we recently sold the company in May. Right? So while I am still the CEO of Squark, I do have another entity that owns Squark now. We'll announce it in August.
So always fun to have an exit. And it was precisely because of these types of use cases that Squark could do, that the customer had been doing for five years with Squark, that as the business grew, and as, you know, the economics of other tools or alternatives to do what we did came into view, it was clear Squark had the potential,
the capability, the cost-effectiveness, and, you know, the fit for purpose. And it was around use cases like this. Um, I'll go back to that in a second. Meaning, like, predictive AI. Um, so, generative AI, right, is also supervised, right? [00:16:00] It's something that you're, you're learning from. There's more to it. There's, you know, GANs, generative adversarial networks, and stable diffusion models, and, you know, the large language models that you see in, you know, RAG and LangChain and all that jazz.
But largely, instead of training on your customer data, well, in addition to training on your customer data, uh, these, like, LLMs train on these huge data sets, you know, like all of Wikipedia, like all of the internet, you know, the Common Crawl, um, you know, and any source of data that they could possibly crawl.
The New York Times, right? And as a result you see lawsuits and you see people questioning, you know, copyright and all that jazz around this stuff. And, and the result of that learning isn't to build a model that predicts, um,
Ryan: yeah,
Judah: ...level outcomes, but in this sense those models, uh, are predicting what is the next word, right?
To say, right? It's, it's next level, [00:17:00] like it's like predicting the next word, um, in a very sophisticated way. And so what's happened is over the years with transformers and Google and, you know, humans have turned language into math. Like you've heard about embeddings, right? And being able to sort of tokenize language into like numbers.
And essentially these LLMs are going through, um, some pretty complex and sophisticated math. And, you know, proprietary approaches that aren't in open source that we know about in certain companies. And they're very quickly and intelligently being able to select what word comes next as it, you know, in real time, right?
As it pops up on your screen. And these models are not, they're not dumb in the sense that They're just taking the most frequent next word probability, right? There's a lot of, um, bespoke programming and trade secrets that allow, given a prompt, the language that's generated to be [00:18:00] constrained, right? To, um, be relevant and, you know, accurate.
And as you've seen, like, you know, ChatGPT 2, if you ever used it, go to, like, 4.0, like, that capability has become stronger and stronger. And so in, like, very discrete and finite places, like programming, where, like, the space of language is well known, SQL, right? These things do really well.
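
The next-word machinery Judah is gesturing at can be shown with a toy example: a model produces scores (logits) over a vocabulary, a softmax turns them into probabilities, and decoding either takes the top choice or samples from the distribution. The vocabulary and numbers below are invented purely to illustrate the mechanism.

```python
import numpy as np

# Invented logits for the next token after "when people buy bread, they buy ..."
vocab = ["milk", "cheese", "bananas", "sunscreen"]
logits = np.array([3.1, 2.4, 1.0, -2.0])

def softmax(z: np.ndarray) -> np.ndarray:
    z = z - z.max()          # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

probs = softmax(logits)
for token, p in zip(vocab, probs):
    print(f"{token}: {p:.3f}")

# Greedy decoding takes the argmax; sampling draws from the distribution,
# which is why the same prompt can yield different continuations.
rng = np.random.default_rng(0)
print("greedy: ", vocab[int(np.argmax(probs))])
print("sampled:", rng.choice(vocab, p=probs))
```
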
We can talk about some of the negatives, like the hallucinations, after, but there are also some, like, capabilities within, like, generative AI, like Retrieval-Augmented Generation and LangChain, that, in the case of LangChain, will allow you to, um, say, like, this response is good or this response isn't good, or this document is, you know, good or not good, you know, for the results to be refined. Whereas in, like, RAG, um, the, the generation is constrained to, like, the facts within the, um, architecture, the [00:19:00] documents that you've uploaded.
So there's also, like, small language models, you know. And that's where, maybe with RAG, not that we know all about what's going on in, you know, ChatGPT or, you know, Cohere, or, you know, we don't know all the details of all these, um, though some models are open sourced. Um, but, you know, in that sense, there's, like, these small language models, right, that will constrain the language through RAG, that allow you to generate, like, information and responses based on, like, your own text, right?
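
Here is a minimal sketch of the retrieval step that makes RAG work: score each document against the question, pick the best match, and prepend it to the prompt so generation is grounded in your own text. A real system would embed documents and questions with an embedding model and use vector similarity; plain word overlap stands in here so the sketch is self-contained. All documents and strings are invented.

```python
import re

documents = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Churn predictions are produced from CRM data every night.",
    "Support is available Monday through Friday, 9am to 5pm.",
]

def tokens(text: str) -> set[str]:
    """Lowercased word set; a crude stand-in for a real embedding model."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question: str) -> str:
    """Return the document with the highest word overlap with the question."""
    q = tokens(question)
    return max(documents, key=lambda d: len(tokens(d) & q))

question = "What is the refund policy for returns?"
context = retrieve(question)

# The retrieved passage constrains generation to your own facts, rather than
# whatever the base model memorized during pretraining.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)
```
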
So it's kind of a subset of large language models, small language models. Um, so you've got, like, generative and predictive AI. And what's interesting is, like, the generative AI is not very good for doing the predictive AI, and the predictive AI really doesn't do what the generative AI does. Like, in Squark, we use, like, transformers to extract features from text.
So when we, like, um, [00:20:00] when we see, like, a large corpus of text, we'll extract the top, uh, noun phrases and meanings and words, and we'll create new columns with a weight per row of, like, whatever we find in that column. So if we find, like, a bunch of text around, like, maroon, cerise, magenta, right? Um, a dark pink or something, we'll create a new column called red, for example, and give a weight of how much redness or those colors are expressed, just as an idea.
You can do anything. Happiness, sadness, you know, sentiment analysis. Um, so, uh, we'll use those transformers, but like, that's to create better data for training. So we have better predictions. Um, whereas, you know, we're producing numbers and large language models are producing words. So there's a difference between the two.
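
The pattern Judah describes, turning free text into weighted numeric columns that a downstream predictor can train on, can be approximated with something as simple as TF-IDF. The transcript says Squark uses transformers for this; TF-IDF is a deliberately simpler stand-in, but the shape of the output (a weight per row for each extracted term) is the same. The data below is invented.

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy free-text column attached to customer rows (illustrative data).
df = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "notes": [
        "loves the maroon and magenta colorway, very happy",
        "complained about shipping delays, unhappy with support",
        "asked about the dark pink option, happy overall",
    ],
})

# Each extracted term becomes a new numeric column with a per-row weight,
# ready to be joined onto the other features used for training.
vec = TfidfVectorizer(max_features=8, stop_words="english")
weights = vec.fit_transform(df["notes"])
text_features = pd.DataFrame(
    weights.toarray(), columns=vec.get_feature_names_out(), index=df.index
)
print(df[["customer_id"]].join(text_features).round(2))
```
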
And I think, like, the immediacy, Ryan, of, like, the ChatGPT-4 interface, and the fact that it's telling you things, and it generates some really, like, great language, if you prompt it well enough, that can help you be more [00:21:00] efficient or answer questions or refine your own work, summarize things. You know, that's really popped in people's minds as, like, what is AI?
To answer my initial question, right, what I would encourage readers or listeners to understand is that, you know, the world of AI is a lot bigger than just generative AI. Though, I mean, everybody's, you know, using it. It seems like, for extremely online people, everybody's using these large language models.
Um, but predictive AI is different. Um, and that's where most of the money is being made.
Ryan: You know, I think that,
so I do, I mean, my background is in mathematics. Like, I'm more than capable of going out and doing artificial intelligence. I guess, you know, if you were to line up all of my, my math cohort, right, I might be considered a sellout for doing more simple data and analytics and everything like that. One of the things that I actually think is really, really [00:22:00] interesting about your response is kind of exactly the same way that I feel, which is when we talk about AI kind of in the public imagination, AI seems to me to just equal, you know, ChatGPT. You know, when you're just talking to people, they're like, oh, I, you know, as you mentioned, like, I know what AI is now. Like, I type something in there and it, like, talks back to me. When I am consulting with a client, you know, so you get all sorts of different requests. Sometimes they're like, we know exactly what we want.
Sometimes it's like, we have a direction. Sometimes we don't know at all, you know. Sometimes it's like, well, we want to do AI. And it's like, oh, okay. You know, like, let's, let's figure out, you know, what are you trying to do? Like, what's the business problem?
Judah: Mm hmm.
Ryan: all of that stems essentially from my understanding of exactly what you mentioned, which is that AI, machine learning, whatever umbrella term we want to put [00:23:00] over all of these things, they're very individual,
you know, groups of tools. And there are cool new tools that get made, but exactly as you mentioned, right? Like, if I'm trying to do a clustering analysis, like, feeding that information into ChatGPT, like, might give me something, but it's certainly not the best tool for the job. So what I always encourage people to do is to kind of balance two things that you've mentioned.
Number one is to make sure that you're very specific on what is the business problem that I'm trying to solve. So, like, if I'm trying to figure out how my customers are going to churn, okay, well, that's the objective. Let's pick the best tool to actually do that. And we focus on the one model that will give us the answers to those questions, as opposed to saying we're going to AI-ify our business because that's cool right now.
Judah: Yeah.
Ryan: The other [00:24:00] big thing that you mentioned is having this perspective of return on investment. So I think there's a couple of different pieces that are incorporated in that. Like, number one, just like traditional analytics, just, like, doling out heaps and heaps of cash to, like, embed different AI tools in all these different parts of the business,
that's, that's typically, you know, not going to be a high return on investment proposition, right? Like, being very targeted, picking the right tools, and focusing on problems one at a time, or at least, you know, some small number in parallel, is typically the way that we'll see great returns with analytics, with AI, with, with anything, right?
Like, if our focus gets too diffuse, it's just, you know, kind of, we lose track of what we're doing and it's hard to see results. The other big thing that I think matters a lot when you're talking about return on investment with artificial [00:25:00] intelligence is kind of coming back to the, the basic economic theory, which is every business has a, you know, core competence area.
There are things that we're really good at, and then there's everything else. And it's impossible, as you mentioned, because we have to optimize for finite resources, including, you know, the knowledge of the people that work for us. A lot of companies, like, sure, like you mentioned, like Google, I mean, they already have core competence here. They have an army of some of the best and smartest software developers and, you know, AI and machine learning specialists in the world.
You know, if you look at another company, one of the big things that I look at is, like, sure, could you efficiently build something like this given infinite time and resources?
Absolutely. But is that your core competence? Maybe, maybe not. It depends a lot [00:26:00] on the business. And so one of the things that I think is really exciting is, you know, looking at a tool like Squark or, you know, as we've seen the public's imagination, you know, the venture capital sphere and all of these things get really energized by AI.
What that means is that there are now going to be an ever-increasing number of tools that are made, so that we don't necessarily have to internally understand all of the nuance of all this machine learning and artificial intelligence and math and all this stuff. There are people out there who do understand that, for whom that is their core competence, whether it's a product or a consultant or whatever the case may be.
We can now take that and put that into our business to solve problems for us. And that's, that's the really exciting thing for me: like, having, you know, all the nerds out there go out and make commercially viable solutions to lots of different problems, and then seeing how that all snaps [00:27:00] together to make new businesses.
Yeah.
Judah: really good points. Um, as I sort of reflect back on what you said, the reason why I think AI is talked about in the C-suite a lot, and, like, why people are making moves, you know, correct or, you know, perhaps misguided, is that, you know, you have to have a strategy around this stuff, right?
You can't really bury your head in the sand or under a blanket and hope it goes away and your business continues to, to thrive. I mean, the reality is all businesses have superior growth periods, you know, and then they start to wane a little bit; they could die. Um, so I think the bigger companies and, you know, mid-market, even smaller businesses, you know, want their superior growth period to be as long as possible, not to die.
Right. And this is purported by, you know, experts and companies that have been successful with it to, like, have an economic value, to have an economic impact. Um, but to your point, like, you know, just [00:28:00] buying tools and doing things because it's, quote, AI or machine learning, you know, that might get you, you know, a slide in a board presentation or, you know, a line in an update or, you know, something that, you know, is short-term benefit.
You know, and the risk is, does that turn into long-term benefit? You have to kind of think through what you want to do and how it impacts the business. And so, what sells in AI, like, I'm probably one of the few people in the world that has sold giant enterprise deals for this. You know, one of our first customers was IBM, you know, the Watson team bought the tool in the early days, um, Nielsen, you know, other companies I can't name, DHL. And the thing is, I didn't sell it to data scientists.
I sold it to business people who fund the data science teams on use cases. And that's what I would tell, like, what I would answer to you and the [00:29:00] listeners: you make money on AI with use cases, and those use cases, you know, address a business problem that you're having. And, depending on who is buying, you know, that problem may be the person's problem, the team's problem, the company's problem. The reason they're buying is, you know, they want to keep their job, get their bonus, or make the company more money.
There could be altruistic or intrinsic, extrinsic benefits for the purchaser or the team. Um, but it all comes down to, success boils down to use cases. Like, what are you going to do? So, like, um, when I talk to people about this stuff, it's, like, one of the first things I ask: well, what do you want to do with it?
And if it's like, well, you know, we're kicking the tires, well, we want to do some AI, like, um, we have a service to solve for that, right? But, you know, I'm not on the first call to, like, teach people what this stuff is, right? It's really to get to the heart of the matter, which is, what do you want to do with it?
And how is it going to make you money or make your business better, you know, in the professional context? So, um, you need to have those use cases, and, um, different tools [00:30:00] do different use cases. So, like, if you want to, you know, index all your textual documents, you know, around, um, uh, you know, marketing, you know, your collateral and stuff, and you'd want to, like, then generate
revisions or new content, you know, based on ideas you have that you could express in prompts, you know, small language models, right? Pretty good for that, because, you know, if you get the right one, it's not going to hallucinate that much, if at all, and it will be able to generate, in context, you know, in your prompt, solid information from your marketing collateral and the themes and, you know, concepts within it.
If you go and, like, upload your collateral to, you know, ChatGPT 4.0, and you don't have the right switch turned off, like, you've just given that data away for training for everybody else. So whatever competitive advantage you thought was expressed in your language, you just gave it away. Or, if you switched off that learning, the shared learning, well, [00:31:00] it's going to generate responses that, you know,
could be sufficient and, and might be really good, or they may contain factual errors or hallucinations, because it's, it's, you know, using a wider corpus than just, like, a small language model. Now, if your goal was to, like, you know, use marketing collateral to predict, you know, who's going to churn, I mean, there's, there's some ways you could do that, right?
But, um, that's not really, like, a very straightforward or kind of wise use case for predictive analytics at face value. So you'd want to, like, elaborate a bit more on that. And then, like, from a vendor perspective, it's like, well, you know, how do you buy software? People who don't buy software don't know how they buy it. So if they don't know how to buy it, they're not buyers.
So if they don't know how to buy it, they're not buyers. You know, and do you have a budget? Do you sign, right? Um, things like that. And then you screen them out. And, um, the buyers tend to be those people who have that clear use case and that line of sight to sources and the data, especially to how they deliver it.
Like when I'm selling or have sold this stuff, it's like, well, what's Squark? Okay. It's kind of an unusually named thing. [00:32:00] So, you know, you explain what it is, predictive analytics. Then like, what's the use case? Oh, it's customer data. You know, we're doing churn, for example. And then they want to know, well, what data do I use?
Well, let's use our CRM data, right? Uh, and what, um, well, who's going to do it? Well, you know, we're going to teach your team, or we're going to put in, you know, a resource for a month to teach your team, so you can learn to fish. And, you know, it gets done. And so it's like, you have to tie to those use cases. Um, and I think, like, that's probably another reason why, like, people in AI at the current time are compensated pretty well, um, because they either know how to elicit what needs to get done in the business context and then, like, apply the tools and the technique to it, either, like, as leaders or as, you know, individual contributors that then can generate those results.
To your point about tools, there's, like, so many different tools. It's almost a state of perfect competition, but generally, you know, the tools will, um, [00:33:00] do prediction, right? They'll, um, they can potentially do time series forecasting; like, that's a little bit different, but we do some time series forecasting, like decomposing cycle and trend and seasonality, and being able to predict that.
Not just one number, like in a regression, but multiple numbers. Um, and then there are even things like causal, you know, AI, that I didn't even talk about in the first answer, which is, like, a more emergent branch of AI that's focused on, like, cause-and-effect relationships. And, you know, a lot of the stuff I described, like, looks at correlations and patterns in the data, whereas, like, causal AI is looking more at the mechanisms behind, like, what are those, you know, um, patterns and correlations, to be able to, um, you know, know what happened, like, counterfactual stuff.
Like, if I didn't do that marketing campaign, what would happen, right? You can understand incremental lift. So, um, different tools will do different things, and you have to have a very clear line of sight to, like, the use case that you want to do, and its business impact, and the data, and who's going to do it, in order to be successful with it.
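
The decomposition Judah mentions, splitting a series into trend, seasonality, and what is left over, is a standard first step in time series forecasting. Here is a minimal sketch with statsmodels on a synthetic monthly sales series; the data is generated, not real.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

# Synthetic monthly sales: upward trend, yearly seasonal cycle, plus noise.
idx = pd.date_range("2019-01-01", periods=60, freq="MS")
rng = np.random.default_rng(7)
trend = np.linspace(100, 180, 60)
seasonal = 15 * np.sin(2 * np.pi * idx.month / 12)
sales = pd.Series(trend + seasonal + rng.normal(0, 4, 60), index=idx)

# Split the series into trend, seasonal, and residual components. A forecaster
# can then extrapolate the trend and re-apply the seasonal pattern to predict
# multiple future points, not just one number as in a plain regression.
parts = seasonal_decompose(sales, model="additive", period=12)
print(parts.trend.dropna().tail(3).round(1))
print(parts.seasonal.head(12).round(1))
```
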
And like, I mean, I've been doing this for a long time, [00:34:00] so it's super easy for me at this point in my professional career, and that's what I do, I guess, fairly well above average: like, go in there, and I can understand the business and the technology and how AI could impact it. And then I can very, um, quickly and accurately, you know, identify what could be done given the current state, what could be done given the future state, or the future state desired,
where the gaps are in that, how to close those gaps, and how to sort of sequence the capital such that you get a result on the things that are low business effort, high business impact, as you ladder up your maturity to get to the, you know, potentially higher business impact, more difficult
problems later on, the higher level of effort. So I think, like, you're right when you think about...
Ryan: I love, I, I, I love the focus on kind of the business first, and that, you know, this is something that I preach a lot. Whether it's, it's [00:35:00] analytics, data, infrastructure, engineering, you know, artificial intelligence, any of this stuff, it's really, like, you are running a business. That business's purpose is usually clearly defined. Actually, even if it's, like, a nonprofit, like, whatever type of organization it is, your goals ideally are fairly well defined.
Judah: Yeah,
Ryan: Those are the goals, right? Like some department doesn't like make up their own goals, at least, you know, hopefully, so for me, it's all about driving towards those and having a clear picture of, okay, we're going to try to do X. What are the tools that can help us achieve that goal? And you're right. Like, I think that there's a lot of really incredible stuff.
There's new stuff every day. The stuff that's out there gets better every day. Um, so it's, it's really a time of, you know, exciting opportunities. But I don't think that, you know, as you mentioned, we should just recklessly abandon the idea that, you know, the business has a [00:36:00] purpose and our job is to try and help it reach it.
Um,
Judah: ultimately, like, the highest order, not to interrupt, Ryan, but that's ultimately the highest order, right? Helping the business. You know, in my career at various levels, you know, some people, like, you know, have some consternation about working with sales teams, you know, sales guys, you know, selling people. You know, and, like, listen, the sales guy is the only reason you have a job. You know, then the executive leadership, they may not know, like, you know, a lot about your fat-tailed autoregressive model or, you know, how to interpret a classifier or whatever.
But the reality is, like, they don't need to. Um, you know, the goals will come from the board. They'll be translated by the CEO to his line-of-business leaders. You know, they'll OKR, they'll do whatever they do, you know, whatever, you know, approach to, um, translating that into tactics, and they'll be reflected down.
And it's in those things, those artifacts of management and strategy, that you can boil up what you need to do with, [00:37:00] you know, AI, or boil it down. And what's, what's interesting, and this is just the facts of the matter, and this is probably somewhat of a controversial statement, is, like, well, this stuff is meant to be human in control. It's not controversial, but it may be different: people say human in the loop; I think human in control.
And, um, there's this idea that, like, you need to have a PhD, you know, or you need to have, you know, some really sophisticated data scientists on staff. Um, and depending on the, you know, industry and what you're trying to do and the available tool sets, that may very well be true.
Um, and certainly if you're building data science software, you need data scientists, right? Certainly if you're, like, you know, modeling pharmacological effects or, you know, um, things where, you know, human life is on the line, um, you know, maybe it makes sense to have more humans in control and more bespoke coding to enable that stuff.
But when you're looking at, like, industries, like things like marketing or sales or, you know, merchandising or retail, [00:38:00] you know, um, you know, a lot of money is wasted, right? And fortunately, generally, like, people's lives aren't on the line. And the idea of this stuff giving you more, um, just value from reducing that waste, you know, even if you reduce it by 10 percent, can be pretty significant, or increase conversion by, you know, 10 percent.
So my take on it is that a lot of it can be automated, and that's what we've seen, um, with Squark over the years as we sort of built this out from an MVP to do binary classification, to do multinomial, to do regression, to do time series, to do explainable AI, and, you know, different things. You know, automated feature engineering, and automated results summaries, and all sorts of stuff that, like, humans would normally do.
But the net result is it goes faster, cheaper, like, less expensive, right? Reduced cost. And the results are as good, if not better, than humans'. And, um, that's [00:39:00] controversial, because, you know, I'll go in and try to sell, and that's why I don't sell the tool to data scientists. You know, frankly, like, and I told this to my friend, Jeremy Atchen, over there at a competitor, um, years ago: data scientists can only say no.
They generally have no budget. Maybe the VPs do, the SVPs do. They're, you know, but they're still getting that from the business and they're still delivering against a business roadmap. They're not just sort of pontificating, you know, statistical fantasy in their groups. Maybe some are, but, um, they're trying to focus on that.
And so, um, you know, data science teams, they can just say no. Like, they, they think they can do it better themselves. You know, like, they went to school to do it, and also they've been told theirs is, like, the sexiest job of the 21st century. You know, they're all very, usually, intelligent, capable people who want to get involved in things and apply their intelligence and capabilities.
And so, you know, telling them, upload the data and click a button, and you're going to get results that are as good as what you, Doctor, [00:40:00] you know, can do, um, is super controversial to some. And what I found out is that, like, the best data science leaders get it and automate a lot, and the best data scientists know when to use this.
It's kind of like asking a coder today, like, do you just code from your head or books? No. Like, they use copilots. They go on ChatGPT-4, right? I actually heard an anecdote about a friend of mine who's a consultant in the space, and he went to a big bank, where they've banned ChatGPT-4, and he gave a lecture.
And, like, two guys came up to him after, and they were pretty senior, and they were like, uh, well, we bring in our own laptops and our phones and we use ChatGPT-4 on the sly. And one was like, I'm up for a promotion as a result of it, right? And so, like, how I've, like, countered objections from data scientists,
it's been, like, either to acknowledge that, you know, they have some good point, and, you know, it's unrelated to the discussion, and, you know, sort of move that piece over there, we'll talk [00:41:00] about it later. Um, or, uh, I will explain to them that if we're right, well, we've just freed up a lot of your time, and you can focus on some higher-order things, things that you really want to do, you know, 80 percent of your work.
And if we're wrong, which we haven't been, well, then you're right. Only you can do it, right? You're sort of the golden child in the white tower, and, you know, everybody should knock on your door and, you know, hope that you answer, and, you know, like Rapunzel, let down your golden hair. But the reality is, um, I've seen it over and over again where this stuff, like, the automated tools, win, not in all, like, in pretty much all the cases I've seen. But I can understand, in some industries, from people I've talked to, where, you know, there's a risk in doing that because of the human lives in the equation. Um, and in selling a company, right?
Like, people, you know, people want to kick the tires, and, um, you know, there was an entity that we didn't end up selling to that, [00:42:00] um, you know, we'd spent some time kicking the tires, and they sort of got their, you know, PhD-level data scientist on the phone. And she eventually admitted that, like, the models were just as good, if not better, than the stuff we do by hand, you know. And, um, it didn't really go anywhere after that, you know, just kind of, like, petered out,
um, because I think that was hard for the business to reconcile. They probably were like, well, why do we have this giant team doing this, if this is the case? Meanwhile, um, one of our customers, you know, came in and ended up buying us. So, um, I think you can automate a lot of it. I think some of it, you know, will be automatable, and some of it you probably want to have more humans in control on, when human lives are at play.
Like, I wouldn't want to automate, you know, ballistic missile defense detection without, like, human overrides, you know? You've got a WarGames scenario with Matthew Broderick, you know, thank you for playing, some tic-tac-toe, you know? How about a nice game of tic-tac-toe? So, um, yeah, but the reality [00:43:00] is, like, this stuff is just getting more and more automated.
And one thing we didn't talk about is, like, agentic AI. And so when you talk about, like, well, what do I select and what do I use, um, and then how does it, you know, relate to something else? Historically, there's been, like, this idea of, like, decision intelligence. And this is not a new concept. It's, like, eight, nine years old. You see it as an, uh, you know, an emerging trend in, like, the Gartner, uh, Slope of Enlightenment or whatever.
Decision intelligence was this idea that you could use AI to make better decisions, whether human or automated, and that by combining the results of models, you can make even better decisions, like multi-linked models.
Like, off the top, say it was Squark: like, hey, here are the churn customers, here's, you know, their, um, their likelihood of churn. Here's, you know, how much, if they didn't churn, from a lifetime value or, you know, next-48-months perspective, or however long your business cycle is, here's what they would be worth.
And so, like, now I've, you know, got a model that predicted [00:44:00] churn. Now I've got a model, for those high-probability churners, of the loss, the risk, the financial risk. And then I might build a model that predicts, well, what marketing offer do I make, out of 10 marketing offers, or n number of marketing offers, to reduce this churn?
And the model will say, hey, you know, for, you know, Ryan, send offer five, followed by offer three, followed by offer two. But for Judah, send offer nine, followed by offer one, followed by offer ten, right? And so I get, like, an individualized sequence. And it's like, so, going from churn probability, to financial risk, and, for those financially risky potential churners, here's the offer to make.
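
That multi-linked flow (churn model, then value-at-risk model, then next-best-offer model) can be expressed as a small pipeline. In the sketch below the three models are stubbed out as simple functions so the chaining logic is visible; every name, threshold, and rule is hypothetical, standing in for trained models.

```python
import pandas as pd

# Three stub "models" standing in for trained predictors (hypothetical logic).
def churn_probability(row):            # model 1: likelihood of churn
    return min(0.95, 0.1 + 0.2 * row["support_tickets"])

def value_at_risk(row):                # model 2: worth over a 48-month horizon
    return row["monthly_spend"] * 48

def next_best_offers(row):             # model 3: individualized offer sequence
    return [5, 3, 2] if row["monthly_spend"] > 100 else [9, 1, 10]

customers = pd.DataFrame({
    "name": ["Ryan", "Judah"],
    "support_tickets": [4, 3],
    "monthly_spend": [120, 80],
})

# Chain the models: score churn, keep the risky ones, price the risk, pick offers.
customers["churn_prob"] = customers.apply(churn_probability, axis=1)
at_risk = customers[customers["churn_prob"] > 0.5].copy()
at_risk["value_at_risk"] = at_risk.apply(value_at_risk, axis=1)
at_risk["offer_sequence"] = at_risk.apply(next_best_offers, axis=1)
print(at_risk)
```
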
That would be like a multi linked decision intelligence model. Now, when you get into like agentic AI and this stuff is new. You know, it's buzzy, people talk about it, but that's, I think, the way it's, it's going. You have these capabilities that may be somewhat autonomous that can break a complex task, like segment it into discrete activities that it then needs to go and execute, right?
Like, for example, if you had an agent to order pizza, I'm just joking, like, you need to, you know, know what pizza [00:45:00] place to, you know, log into and how to log in and how to, you know, need to know an order, how to select it, you know, you need to know how to, you know, can do that. Buy the pizza, right? And then maybe you want to do an update as to when it's going to be delivered and confirm the delivery, you know, these are all separate actions.
And in theory, I'm joking a little bit, but an AI pizza agent would be able to deconstruct all those steps and then execute them. So when you start to think about that, like decision intelligence flow, which, yeah, you can script it right now, but I would probably do manually. Yeah, absolutely. There's this idea that agents are going to be able to do that work.
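
The pizza joke actually maps onto how simple agent loops are structured: a planner decomposes the goal into discrete steps, and an executor runs each step. A toy sketch, with the planner and the tools faked so it runs standalone; in a real agentic system the planner would be an LLM call and each step would hit a real API.

```python
def plan(goal: str) -> list[str]:
    """Stand-in planner: a real agent would ask an LLM to decompose the goal."""
    if "pizza" in goal:
        return [
            "log into the pizza site",
            "select the order",
            "pay for the pizza",
            "get a delivery estimate",
            "confirm the delivery",
        ]
    return [goal]

def execute(step: str) -> str:
    """Stand-in tool call: pretend every step succeeds."""
    return f"done: {step}"

def run_agent(goal: str) -> None:
    # A real loop would feed each result back to the planner and re-plan.
    for step in plan(goal):
        print(execute(step))

run_agent("order a pizza")
```
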
Um, and where you see agentic AI, um, yes, we see a lot of it, but where you see a discussion about agents is in, like, customer service. You know, so companies like Maven AGI out here in Boston, who are building an agent that will allow them to respond to tickets, right? And, you know, potentially go beyond [00:46:00] just saying, hey, well, you need to, um, you know, fix your, um, sequence in HubSpot, right?
The agent will then tell you, hey, you need to fix your sequence in HubSpot, and here's how you need to fix it, and then purportedly go in and, like, actually execute calls, like clicking on the interface and fixing, right, the sequence. And so I think that's where it's going. And, like, in the more complex business workflows,
there will be a level of, like, agents plugged into pieces of the business process, that will be one or more models that work together, with that human in control to supervise it. So, like, the nature of work is going to change for people. But again, like, to build these, what do you need? You need data scientists, right?
And to, like, get these things implemented, you need strategists and people who can understand them in the context of the business, and you need willing executives who are, like, creating strategies at the top and allocating capital, and the sequence [00:47:00] of that capital, to deliver against those, um, those projects that have that high NPV or high internal rate of return.
So you end up, you know, making money in the long run from the investment. And I tend to think that's where it's going. So when I start to hear people, like, you know, oh, the AI bubble's about to burst, it's like, I've been in this space so long, and, like, it's not about that. And I hear, like, you know, well, it can't do this.
It's like, yes, or, you know, it couldn't, you know, but it may in the future; it will. And when I hear only people can do it, I start to just think, like, it's a little bit, it's a little bit of a dated thought. So, the way I've always liked to sell a company: you can sell what you did, but you're really, like, selling what you will do in the future, right?
And so similarly, I think, as an executive and a leader or a person working in this, there's what you're doing today and what you're going to be doing in a year, two years, and five years. And I think, like, the best leaders in this space are not making decisions based on, you know, what, what [00:48:00] needs to get done only in the next three to six months, but they're looking at, like, two-, three-, five-year roadmaps, um, and then how to, you know, how to benefit from this technology.
And that's going to be rethinking resources, rethinking, you know, capital and infrastructure, and rethinking, like, business processes in total. So
Ryan: yeah.
Judah: It's, you know, we're still in the early stages, Ryan. You know, like, conversations like this, people like you, um, exposing people in the industry to listeners who, you know, can then learn from them.
And, um, people like me are, like, hands on the ground, you know, on the train track every day, trying to move it forward. You know, all of us are important in making it real, uh, realer, and, and creating these, uh, really cool capabilities into the future. Ultimately, I think, like, I'm not, like, a FUD AI guy, like, oh, you'll create Skynet, you know, Terminator is gonna come out, like, the world's gonna get destroyed. I, I think, like, a little differently.
I I think like a little differently I think it's gonna change the nature of work. It's gonna make people Have [00:49:00] more free time and it's going to help our societies, you know get better whether it's You know, business stuff that we do or as the government gets involved, you know, um, through, you know, better climate policies, through better energy, you know, policies, uh, better ways to, um, reduce, um, you know, crime or to prevent, um, you know, societies from eroding through factual, misfactual misinformation.
I think like AI, sure, you know, could be used negatively, but I think, um, It won't be. I think, um, we're going to get what we deserve as humans, and I think what we deserve is a great world and great culture, so we're gonna, I think the people in it are aware of that. And they hear all the FUD and uncertainty and sort of naysayers and Luddites talking about it.
And I think people get that. And, you know, we're all humans working to make it better and, and create these new architectures. And the idea isn't to like, make the world [00:50:00] worse. It's to make it better. And I think that's ultimately what's going to happen. That being said. Right? We live in a world of capitalism, not altruism.
So these things are going to occur based on, you know, markets and the, the players in the markets. And so that's where like, you know, citizens and governments and, um, competitors and, you know, workers and employees and consultants and leaders in those, those industries and those markets and those companies, um, have a, have to learn this stuff and have to participate in making it better.
Ryan: it's a, it's a super exciting future. I think that, you know, there's, there's definitely roadblocks and fears, but in general, I think it's going to be really, really interesting to see what happens. Um, Judah, I can't thank you enough for sharing as much of your time and experience as you have.
Like, [00:51:00] you know, just from getting to talk to you, um, it's extremely apparent that, you know, like, you've been in this space, and thinking about it, and doing it, for a long time. And you were able to kind of distill all of that information for all of the listeners in what I found to be an extremely digestible way.
Yeah. I really can't thank you enough for taking your time to share that with everybody. If, if anybody loves what you had to say today, what might be a great way to reach out and connect with you?
Judah: Well, first of all, thank you for the kind words. You know, like, I really love what I've been doing with, um, analytics and data and AI for, like, my whole career, almost three decades now. So it's great to have the opportunity. Like, I count my blessings every day to be able to talk to intelligent people like yourself, participate in, you know, forums like this, and just, like, actually get to, you know, do this stuff every day.
So I really love doing it. So thank you for those nice words. And I'm glad that it comes out when I talk to people about it. And I've [00:52:00] also been trying to simplify it and make this stuff, like, ubiquitous. Like, I really believe that, like, people should have AI, not just governments and companies. So a lot of the thrust of my work has been doing that.
So if you want to reach out to me, like, um, probably the easiest and best way is LinkedIn. Like, I'm on LinkedIn all the time. I've been there for over 20 years. Um, and so I post, um, ideas and thoughts and repost colleagues, and I do this Humans of AI thing on LinkedIn. So every month or so, or when I have some free time, um, I'll just pick a random person I know in the industry,
and I'll write about them and say, hey, this is a human in AI, right? Uh, I'm also on X, um, formerly known as Twitter. Um, I've been on there a long time. So I'm at Judah, J-U-D-A-H. Um, you'll see me retweeting a lot of, um, pictures of space, basically, and every so often some professional stuff, but you can always reach out to me there.
LinkedIn's the best. [00:53:00] And I'm also at Squark, you know, so judah@squarkai.com if you want to reach out that way. And if there are any people wanting to kick the tires, or potential buyers of some predictive technology, and something I said was interesting, I'd be happy to talk with you about that technology and, and, you know, anything else, really.
So reach out. I'm always willing to lend a helping hand and really like to collaborate with people, um, all over the world. So feel free to do that. And I want to thank you, too, Ryan, for having me on, letting me pontificate, wax poetic on these questions. And, uh, you know, I really appreciate, uh, your time and your leadership in the industry, and amplifying, um, the voices within it, and helping your learners, your readers, and your listeners, um, educate themselves.
Uh, you're part of the solution, right? And building a better future. And so thank you for, for your efforts too.
Ryan: Thank you. Wow. That is very kind. I [00:54:00] appreciate it. Thank you again for that. I also want to make sure to thank the audience. If you learned something, if you liked something, please make sure to like, subscribe, share, tell a friend, give us a review, hopefully a good one. All of that stuff is a huge help in what Judah mentioned, which is kind of getting this information out to everybody, to, to hopefully learn a little bit and, you know, make their life easier and make their business a bit better.
So, Judah, thank you again so much for the time. And this has been another exciting episode of the Making Better Decisions Podcast. Thanks for listening.
Outro: That's a wrap for today's episode of Making Better Decisions. For show notes and more, visit makingbetterdecisions.live. A special thank you to our sponsor, Canopy Analytic. Canopy Analytic is a boutique consultancy focused on business intelligence and data engineering. They help companies make better decisions using data. For more information, visit canopyanalytic.com. There's a better way. Let's find it together and make better decisions. Thank [00:55:00] you so much for listening. We'll catch you next week.