What is Machine Learning? What is AI? And how does it impact contact center operations?
In this series we discuss contact center industry trends and best practices, as well as share success stories and pain points with some of the most innovative professionals in the industry. Join us as we learn and grow together in order to provide world-class customer service to each and every one of our clients.
Machine Learning and Artificial Intelligence - What is it and how are contact centers using AI/ML?
[00:00:00] Dave Hoekstra: Welcome to Working Smarter, presented by Calabrio, where we discuss contact center industry trends and best practices, as well as share success stories and pain points with some of the most innovative professionals in the industry. We are very glad you're joining us to learn and grow together in order to provide world-class customer service to each and every one of our customers.
[00:00:18] Dave Hoekstra: My name is Dave Hoekstra, product evangelist for Calabrio, and my guest today is Kyle Smaagard. Kyle is the technical program manager for machine learning and rapid prototyping here at Calabrio. And we are super excited to have Kyle join us, because AI and ML are super buzzwordy topics that show up a lot in what we do today.
[00:00:42] Dave Hoekstra: What I wanted to do is bring Kyle on and have him demystify some of the things about artificial intelligence and machine learning that kind of permeate our industry and the software solutions that we have. So I'm very excited. Kyle, thanks for joining us. My first question to you is really more of a statement: tell us a little bit about who you are and how you got into this business.
[00:01:03] Dave Hoekstra: So, you know, where was that moment of inspiration that said, you know, machine learning is where I want to be? Where did that start?
[00:01:11] Kyle Smaagard: Yeah. Thanks a lot. Thanks a lot for inviting me, Dave. This is really exciting. So it kind of started for me in 2016, actually. I was attending a conference.
[00:01:21] Kyle Smaagard: It was one of those un-conferences where anybody could present on any topic they wanted, and there was an individual there who was demonstrating how he had written a program to teach a little stick figure how to walk using machine learning, specifically reinforcement learning. And he walked through exactly how he did that.
[00:01:39] Kyle Smaagard: And I looked at that and I was like, this is really interesting. And oh, by the way, it looks simple enough that I could probably tackle it and do something similar. So that's really where I started diving into, well, what is machine learning and how can we use it? My journey from there was being self-taught, trying to build up grassroots
[00:02:01] Kyle Smaagard: interest in the topic at my previous company, with the goal of, hey, we need to have a team at this company that goes around using these machine learning techniques to generate insights from all of the data that we have. And in the meantime, Calabrio reached out and said, hey, we built a team, and their goal is to go around the company and figure out how to use the data
[00:02:23] Kyle Smaagard: we have to generate insights. Are you interested? And I was like, well, you got me there.
[00:02:28] Dave Hoekstra: Right. As a matter of fact, I am. And that's great. So, you know, let's dig right in to what you talked about. The term you used was machine learning.
[00:02:40] Dave Hoekstra: But actually, sorry, with the stick figure, what was the other term that was used?
[00:02:44] Kyle Smaagard: It was reinforcement learning.
[00:02:45] Dave Hoekstra: Reinforcement learning. So let's just start there. Let's start with machine learning. What is machine learning? What the heck are we talking about here?
[00:02:52] Kyle Smaagard: Yeah. So the way I like to think about buzzwords like AI, machine learning, deep learning, reinforcement learning, I kind of equate them to things that we learned in grade school, in high school, and in college.
[00:03:05] Kyle Smaagard: So you can think of AI as sort of a broad field of study like biology. And if that is the case, there's a lot of things that fall under AI. Just like there's a lot of things that fall under the field of biology. You can look at, you know, existing animals. You can look at historical animals.
[00:03:24] Kyle Smaagard: But when you start narrowing down that field of study, let's say you start fixating on autopsies, you wanna be a mortician, right? That's more along the lines of machine learning: it's a tool or a technique that is used to learn more in the area of biology. So machine learning is this very specific tool or technique that we use to advance the field of AI, and you can go even deeper.
[00:03:50] Kyle Smaagard: I said reinforcement learning, which is a subset of machine learning, is a type of technology that you can use or a way to train a machine learning model. Just like when you're doing an autopsy, maybe you have a particular scalpel that you might use to do one particular cut versus a different particular cut depending on what you're trying to learn.
[00:04:11] Kyle Smaagard: And that's kind of the way I think about it.
[00:04:13] Dave Hoekstra: I did not have autopsy on my Working Smarter bingo card today. But that's actually a really great way of looking at it. Right. And so how do you teach a machine to learn? How do you do that? I mean, in my mind I can kind of conceptualize the idea, but really getting down into the nitty-gritty, how do you write code
[00:04:34] Dave Hoekstra: that learns, for lack of a better question?
[00:04:38] Kyle Smaagard: So this is the interesting innovation. When people think today about writing code to have a smart system, they think, well, I wanna teach this machine everything. I wanna say, if this, then that. Now we're talking really about expert systems, which were around in the eighties and nineties, where you had to explicitly call out everything that a machine
[00:05:00] Kyle Smaagard: might need to know. So you might think of those as, you know, really advanced help files. Essentially what we learned (and what was around back then, but the computing power wasn't there) is, hey, there's a different way of learning. Rather than explicitly saying, here's everything that you know, we're going to teach a machine the same way we teach a person.
[00:05:18] Kyle Smaagard: So Dave, I wanna ask you, how did you learn what a dog is?
[00:05:25] Dave Hoekstra: Well, I was a pretty smart kid, right? And, you know, more than likely, we had a dog. My first pet's name was Buffy, and no, that's not gonna show up on anybody's social engineering password-guessing list. But more than likely, I didn't really know that a dog was a dog. There was just this animal that was part of our family that my parents called a dog, right?
[00:05:51] Dave Hoekstra: And it wasn't the dog's specific name. I probably in my mind had to learn to separate what the proper name of the dog was versus the category of what a dog was. And you know, now that you're asking me this question and I'm examining it, I can see why this is so hard for machine learning.
[00:06:09] Kyle Smaagard: Yeah. So it's, you know, you think about it and think about how a child learns, like, what is a dog, right? We don't tell them that a dog has four legs, a dog has fur, a dog has ears that look like this, because a different breed might have different fur, a different breed might have a different set of ears.
[00:06:26] Kyle Smaagard: So how does a kid understand what a dog is? Well, the answer is actually really simple. They see lots of examples of what a dog is. They see 'em all over the place. They see 'em in pictures, they see 'em in movies, they see 'em on TV, and they see all of these variations of what a dog is.
[00:06:43] Kyle Smaagard: And you don't have to tell a kid that a dog has four legs, a dog has fur. You can look at a five-year-old and give them a picture of a cat and a dog and say, well, which is it, a cat or a dog? And they'll be able to pretty reliably point out which is a cat and which is a dog, but not really be able to describe why
[00:07:00] Kyle Smaagard: that's the answer. There's a lot of intuitive things that they've learned, like, well, they both have four legs, they both have fur, right? But there's size differences. A dog is usually larger than a cat, but not always. The face structure is different. How they sound, if you have that information, is a little different.
[00:07:14] Kyle Smaagard: So the way that they learned was just seeing all of those examples over and over again, and associating that example with dogs. Somebody said, this is a dog. Somebody said, this is a cat. Here's a movie about a dog. Here's a movie about a cat. Or here's a movie about cats and dogs. And you can clearly see the difference.
[00:07:30] Kyle Smaagard: So you learn just intuitively that there are differences, and maybe I can't explain them as a five-year-old, but my brain has made those connections and can associate what is a cat with what is a dog. And that's really how we end up teaching. I mean, that's really what deep learning is: we give the computer the ability to make its own connections
[00:07:51] Kyle Smaagard: and say, here are a thousand, a million, a hundred million examples of the differences between a cat and a dog. And the machine learning model builds up a baseline understanding of, okay, I'm looking at a picture, and there's lines. Those lines make up boxes or curves or circles. Those circles, when combined, make up
[00:08:08] Kyle Smaagard: facial features like perhaps ears, nose, eyes. And as it builds up that understanding of the world, it then associates that end result of, here's my overall picture, it's got four legs, it's got ears that look like this, with dogs or cats. And so in the future, you can give it another example that you don't know and say, well, what do you think this is?
[00:08:30] Kyle Smaagard: And the machine can say, well, based on my understanding of the millions of examples that I've seen of cats and dogs, I think this is a dog just like a human would.
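The learning-from-labeled-examples idea Kyle describes can be sketched in a few lines of Python. This is a toy nearest-neighbor classifier, not anything Calabrio ships: the weight and ear-length numbers are invented stand-ins for the features a real deep learning model would discover on its own from raw pixels.

```python
import math

# Toy labeled examples: (weight_kg, ear_length_cm) -> label.
# These numbers are made up purely for illustration; a real model
# learns its own features from millions of images.
examples = [
    ((25.0, 9.0), "dog"), ((30.0, 11.0), "dog"), ((8.0, 7.0), "dog"),
    ((4.0, 5.0), "cat"), ((5.0, 6.0), "cat"), ((3.5, 4.5), "cat"),
]

def predict(features):
    """Label a new animal by its single nearest labeled example (1-NN)."""
    _, label = min((math.dist(features, f), lbl) for f, lbl in examples)
    return label

print(predict((28.0, 10.0)))  # close to the dog examples -> "dog"
print(predict((4.2, 5.2)))    # close to the cat examples -> "cat"
```

The model is never told "dogs are bigger"; it just answers by similarity to the examples it was given, which is the essence of learning from examples rather than from rules.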
[00:08:39] Dave Hoekstra: But doesn't it have to start with, you know... should we be calling it machine teaching and not machine learning? Doesn't it have to start with a human being providing at least one example of, this is a dog, and this is a cat?
[00:08:52] Kyle Smaagard: Yeah, that's true. You do need to tell it what the world is. And some of the innovation nowadays is what we call few-shot learners or one-shot learners. That's the ability to take a pre-built model that has, perhaps, an understanding of pictures and say, hey, I've got this program that has a general understanding of what clouds are, and I can give it five examples of, let's say, cumulonimbus clouds versus stratus clouds,
[00:09:18] Kyle Smaagard: and it will know, all right, well, I already know what makes up a cloud, and this is just going to give me the differentiation between the two. So in a nutshell, yes, in most circumstances we do need to give it the right answer. That's what we actually refer to as supervised learning. But we can do that with very few examples nowadays.
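Here is a minimal sketch of the few-shot idea under heavy assumptions: `embed()` is a stub that hands back made-up two-number vectors, standing in for the image embeddings a real pretrained model would produce. With only two labeled examples per cloud type, a nearest-centroid rule is enough to classify a new image.

```python
# Few-shot classification sketch. A real system would get embed() from a
# network pretrained on millions of generic images; here it is faked.
def embed(image_name):
    fake_embeddings = {
        "cumulonimbus_1": (0.9, 0.1), "cumulonimbus_2": (0.8, 0.2),
        "stratus_1": (0.1, 0.9), "stratus_2": (0.2, 0.8),
        "mystery": (0.85, 0.15),
    }
    return fake_embeddings[image_name]

def centroid(vectors):
    """Average a list of vectors component-wise."""
    return tuple(sum(dim) / len(vectors) for dim in zip(*vectors))

# Only a handful of labeled examples per class -- the "few shots".
classes = {
    "cumulonimbus": centroid([embed("cumulonimbus_1"), embed("cumulonimbus_2")]),
    "stratus": centroid([embed("stratus_1"), embed("stratus_2")]),
}

def classify(image_name):
    """Pick the class whose centroid is closest to the new embedding."""
    v = embed(image_name)
    return min(classes, key=lambda c: sum((a - b) ** 2 for a, b in zip(v, classes[c])))

print(classify("mystery"))  # nearest to the cumulonimbus centroid
```

The heavy lifting (understanding "what makes up a cloud") lives in the pretrained embedding; the five examples only supply the differentiation, which is why so few are needed.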
[00:09:35] Kyle Smaagard: Now there is a category of machine learning of what we call unsupervised learning. And this is the ability to say, well, we don't know what the right answer is, but we want you to learn what that possibly could be. Some examples of this are what we refer to as clustering algorithms.
[00:09:52] Kyle Smaagard: So, in our cat and dog example, I might not actually tell you what cats and dogs are, but I might give you all of the data and say, okay, separate this data into two separate things. Now, depending on the data set, you might get what you want, you might get cats and dogs, or, depending on your pictures, you might get brown animals and black animals, right?
[00:10:16] Kyle Smaagard: This is where the trickiness of unsupervised comes in: because you're not telling it explicitly what you're looking for, you can get some surprising results.
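The clustering idea can be sketched with a tiny k-means, written from scratch here purely for illustration. Note the algorithm is never told "cat" or "dog"; it splits on whatever feature dominates the distances, which is exactly how you can end up with brown-vs-black instead of cat-vs-dog.

```python
import random

# Each point is (size_kg, coat_darkness). With these numbers, size
# dominates the distances, so k-means will split big vs. small animals;
# with different features it could just as easily split on color.
points = [(25, 0.2), (30, 0.8), (28, 0.5),   # larger animals
          (4, 0.3), (5, 0.7), (3, 0.9)]      # smaller animals

def kmeans(points, k=2, iters=20, seed=0):
    """Plain k-means: alternate assigning points and moving centers."""
    centers = random.Random(seed).sample(points, k)
    clusters = []
    for _ in range(iters):
        # Assignment step: each point joins its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])))
            clusters[nearest].append(p)
        # Update step: move each center to the mean of its cluster.
        centers = [tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return clusters

groups = kmeans(points)
print(sorted(sorted(p[0] for p in g) for g in groups))  # [[3, 4, 5], [25, 28, 30]]
```

Nothing in the code names the two groups; interpreting what the split *means* is still up to the human, which is the surprise Kyle warns about.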
[00:10:23] Dave Hoekstra: Has unsupervised learning led us down a path that we never could have expected, where we figured something out? Do you have any examples of that?
[00:10:31] Dave Hoekstra: Because I could imagine, you know, unsupervised learning coming back with something unexpected. And possibly the dog and cat analogy doesn't work here, but if you imagine, like, the analytics of football stats or baseball stats, right? It surfaces statistics that we never even considered before, right?
[00:10:48] Dave Hoekstra: Have we seen an example of that?
[00:10:49] Kyle Smaagard: You know, I'm trying to think of a good example. I don't have a good example, especially a Calabrio specific one. I'm sure they're out there. I mean, if you think about it, this is what, you know, business intelligence is all about. Right? Tell me what I don't know.
[00:11:00] Kyle Smaagard: And I'm sure there's lots of surprises in there. I can't think of an explicit one off the top of my head though. That's all right.
[00:11:06] Dave Hoekstra: Sometimes we get questions that we're not prepared for, and that's the fun of doing the podcast here. What I'm really curious about when it comes to machine learning, cuz,
[00:11:18] Dave Hoekstra: you know, I think for most people, actually I would imagine 95% of the world, doesn't really have much of a concept of what machine learning is. To me, the example I always tell people is the Google are-you-a-robot check, where it shows you a grid and says, click every square that has a boat in it.
[00:11:40] Dave Hoekstra: Is that really machine learning? Or are we teaching Google to recognize a boat?
[00:11:46] Kyle Smaagard: So Google was pretty brilliant actually, and this actually has its roots back in GOOG-411. Do you remember that?
[00:11:54] Dave Hoekstra: I do. And I love GOOG-411 as an example of how we're all just suckers who teach Google's algorithms.
[00:12:01] Kyle Smaagard: Yeah. So, yeah, you're absolutely right. I mean, GOOG-411 was not just a useful service, but one of Google's first forays into, how can we get people to give us free information? And in that case it was voice recordings, so that they could train their models. But now we're talking about, and I'm blanking on the terminology for the AI check.
[00:12:19] Kyle Smaagard: CAPTCHA, that's it, CAPTCHA. So CAPTCHA started out as an anti-spam protection, right? People would just create scripts that automatically create accounts, like Gmail accounts, and then just start spamming people. So CAPTCHA came out with, okay, well, let's display an image, because we know computers can't recognize images,
[00:12:37] Kyle Smaagard: right? And we'll put a bunch of letters in there, and a human will be able to pick it out. Well, that worked for a while, right? But now it's very easy for us to build a machine learning model that can pick out letters or numbers from images, which is why CAPTCHAs have gotten really hard, and sometimes, like, I can't even figure them out.
[00:12:53] Kyle Smaagard: And they came out with the, well, sound-it-out-for-me option. That was the innovation there. But Google, again, looked at this and said, okay, CAPTCHA is great, CAPTCHA solves a problem, but it's not good enough. The spammers are advancing faster than CAPTCHA is. So they came out with reCAPTCHA. And that's what you see today, where you have like a nine-box image and it says, hey, click on all of these that have a sidewalk, or click on all of these that have a car. And here's what it is actually doing in the backend.
[00:13:18] Kyle Smaagard: It's correlating your answers with other humans'. So it's not validating against a known answer; it's saying, well, if enough people say that these boxes were correct, then you're probably a human, right? And what Google is using those answers to do is identify, hey, now we've got a labeled data set from a bunch of people.
[00:13:35] Kyle Smaagard: Here's, you know, nine out of 10 people who think this is a car. Eight out of 10 people think this is a sidewalk. That's a really good machine learning data set for predicting images. They did the same thing with house numbers. Back when Google Street View was getting going, they needed a way to associate addresses with these locations.
[00:13:53] Kyle Smaagard: So what they did was they wrote a little algorithm that would extract the house numbers, and they used those in reCAPTCHAs, where, okay, well, instead of just a static image, we're gonna use the house number and you can tell us what the house number is. So now, again, we've got a labeled data set. Google has done very well at making, like you said, suckers label their data for it, but also providing a, you know, a useful service at the same time.
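The consensus mechanism Kyle describes can be sketched as a simple vote count. This is purely illustrative (Google's actual thresholds and logic aren't public here): squares that enough users agree on become labels for a training set.

```python
from collections import Counter

# Each user's answer is the set of grid squares they clicked
# for the same image. Square numbers here are made up.
responses = [
    {1, 4, 5},
    {1, 4},
    {1, 4, 5},
    {1, 4, 5},
    {1, 4, 8},
]

def consensus_labels(responses, threshold=0.8):
    """Keep only squares selected by at least `threshold` of respondents."""
    counts = Counter(sq for r in responses for sq in r)
    needed = threshold * len(responses)
    return {sq for sq, n in counts.items() if n >= needed}

print(sorted(consensus_labels(responses)))  # [1, 4]
```

Squares 1 and 4 clear the 80% bar and become "car" labels; square 5, which only three of five users clicked, does not. The same agreement score doubles as the is-this-a-human check.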
[00:14:14] Dave Hoekstra: That's what Google Photos has turned into, right. Google Photos is amazing because I can go into my photos and I can type in basketball and it pulls up every photo that I have.
[00:14:22] Dave Hoekstra: But Google has also used every single image I have to train whether or not there's a basketball in these photos. And they did it, you know, their genius was to give it away for free for quite some time. And now I have to pay them $20 a year for the privilege.
[00:14:37] Kyle Smaagard: For the storage.
[00:14:38] Kyle Smaagard: Exactly. Cause it's so useful to you, you wanna use it all the time.
[00:14:41] Dave Hoekstra: It actually is, and I'm a thousand percent with you. And it's funny you mentioned the reCAPTCHAs. I think all of us listening have probably had that bead of sweat trickle down when it asks you to click on the streetlight,
[00:14:51] Dave Hoekstra: and, like, one pixel of the streetlight is in one square. Do I click on that square? Do I not click on that square? So let me recap here. Machine learning, I'm gonna try to create an analogy here. Machine learning is like teaching a five-year-old to recognize something, but once that five-year-old recognizes something, it can continue to learn at a much more rapid pace than a five-year-old could. Which allows us to, you know, nobody's gonna sit a kid in front of 10 million images and say, figure out which one of these contains, you know, a stop sign.
[00:15:27] Dave Hoekstra: A machine learning algorithm takes the same type of effort to teach what a stop sign looks like. But once it knows, it can unleash its capabilities and work much more progressively from there. Is that a fair assessment?
[00:15:41] Kyle Smaagard: Yeah, absolutely. And one of the biggest things is you can then build on that.
[00:15:46] Kyle Smaagard: So once you've taught something to understand what a stop sign is, you can start teaching it the rules of the road, right? If it knows what a stop sign is and what a stop sign means. Now you start talking, well, if we've got, you know, a self-driving car, it needs to know what a stop sign is and what a stop sign means in order to assimilate that information and be a self-driving car.
[00:16:07] Kyle Smaagard: So all of these things that we build, and we do it at Calabrio too, everything we build builds off of previous experience. That image recognition I mentioned, the few-shot and one-shot learning, that is all built off of this massive model Google has built, what we refer to as an ImageNet model,
[00:16:25] Kyle Smaagard: that has been trained on millions and millions of just generic images. And it may or may not know what a stop sign is, but the point is, it already has a concept of, okay, octagons, and red, and, you know, S-T-O-P. So if you put all that together and say, hey, here's five examples of a stop sign,
[00:16:43] Kyle Smaagard: If it didn't know what a stop sign was previously, it now does. So it's very easy to build on the, okay, well, you already know everything about the world and I'm just gonna name things for you and associate things for you. Maybe, you know, it understands red, yellow, and green lights, but when we put 'em together, that's a stoplight.
[00:17:02] Dave Hoekstra: Did you ever see the movie The Invention of Lying, with Ricky Gervais? Is that the one where he's the first guy in the world that's able to lie? Yes. I saw that a long time ago. Okay. It's a pretty great movie, but it's a really fantastic, you know, kind of analogy for what I think people are afraid of with machine learning.
[00:17:24] Dave Hoekstra: Right. And maybe you can tell me if the word learning is a bit of a misnomer, but the fear is, you know, in the movie The Invention of Lying, for those of you who aren't aware, Ricky Gervais's character is in a society where lying doesn't exist. People do not know that you can lie.
[00:17:43] Dave Hoekstra: So everybody tells the truth in every single scenario, right? And he goes into his bank one day and the bank's computer systems are down and she says, well, my computers are down, but how much money do you have in your account? And I'll give you what you need. Because there's no concept of lying whatsoever.
[00:18:02] Dave Hoekstra: And it literally goes to this animation where it goes into Ricky Gervais's brain, and you see him figure out that he can lie. He could say, oh, well, I have $10,000 in my account, so please give me a thousand dollars. And you could see his mind just go crazy with this new possibility of lying.
[00:18:20] Dave Hoekstra: To me, that's what I think a lot of people are afraid of when we talk about machine learning: that this machine is gonna make this connection and say, wait, if I destroy all humans, I can live forever. Right? Or something like that. We start talking about the Matrix or Skynet and things like that. But reassure me here, it doesn't work that way, does it?
[00:18:38] Kyle Smaagard: So what's in the movies, what you're referring to, that's what's called generalized AI. It's basically the idea that you have something that knows everything. What we have today is very hyper-specific. I mean, think of it like a super-advanced five-year-old.
[00:18:53] Kyle Smaagard: It's still a five-year-old, but a five-year-old you've taught to understand dogs and cats. We're a long way away from it being able to say, okay, well, we need to eliminate all of the cats or all of the dogs so that the cats can grow, because I want the cats to grow. We're a long way off from that.
[00:19:08] Kyle Smaagard: And it's hard to say, you know, will that become true? Highly unlikely. There's a lot of, you know, things out there that people push out and hype up as, hey, this bot knows everything. As Meta learned just two months ago, they put out a chatbot that allegedly learned from medical journals, medical papers.
[00:19:27] Kyle Smaagard: And when they put it out, they were saying, hey, this is great, it can answer all your medical questions. It spewed off a bunch of stuff that was just wrong, right? It's not a simple task to summarize medical knowledge, but even in just regurgitating existing information, we're not able to do that very well.
[00:19:45] Kyle Smaagard: So we're a long way off from it being a problem, if it's even going to be a problem.
[00:19:51] Dave Hoekstra: Yeah. And, you know, the only caution I'd throw into the wind is that at one point in time we thought nobody could run a four-minute mile, and we as a human species can do that easily now. Well, some of us. Not me, I'm lucky to break the 10-minute mile at this point. But is it because a lot of it is based on kind of a goal?
[00:20:10] Dave Hoekstra: You know, it's because the goal is not to make humanity better. The goal is very specific: identify a stop sign. Right? Is that what's kind of keeping it from going a little unwieldy at the moment?
[00:20:25] Kyle Smaagard: Yeah. You know, even that has its problems. So now we're back into reinforcement learning.
[00:20:30] Kyle Smaagard: So when you talk about goals, we're talking about the technique of reinforcement learning, and that's where you give the AI a task, you give it rules, and then you give it an outcome to optimize for. And in reality, this is sort of how all machine learning works: you're optimizing for an outcome. But with reinforcement learning in particular, you're giving it a graded scale, basically, and then allowing it to improve upon itself.
[00:20:52] Kyle Smaagard: There's a number of examples out there, and I wish I could remember more of the specifics, but lots of people have taught AI to, you know, solve Mario, and it does it really well. It can optimize for that. But there are games out there that the AI breaks, because it finds a bug and realizes, well, if I can use this bug, I can optimize, you know, the time to the end of the level or anything.
[00:21:12] Kyle Smaagard: So what's key is, hey, you need to define those rules and define that objective for what you're really trying to do, because it will do surprising things. Now, is it going to be malicious deliberately? No. But now we're back into your, well, is it unintentionally going to say humans are bad because they're causing problems?
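The "give it rules and an outcome to optimize" loop can be sketched with tabular Q-learning on a toy five-cell corridor (no relation to any real game): the agent is rewarded only for reaching the rightmost cell, and it learns exactly that objective, nothing more.

```python
import random

# A 5-cell corridor: start at cell 0, reward only upon reaching cell 4.
# The reward line below IS the objective; anything it accidentally
# rewards (a bug, a shortcut) would get "optimized" just as eagerly.
N, GOAL = 5, 4
ACTIONS = [-1, +1]  # step left or right

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=1):
    """Tabular Q-learning with epsilon-greedy exploration."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != GOAL:
            # Mostly act greedily, sometimes explore at random.
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s2 = min(max(s + a, 0), N - 1)
            r = 1.0 if s2 == GOAL else 0.0  # the outcome we chose to optimize
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in ACTIONS) - q[(s, a)])
            s = s2
    return q

q = train()
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N - 1)]
print(policy)  # learned greedy action per non-goal cell; trains to all +1
```

The agent ends up always stepping right, because that is literally all the reward asks for; defining the rules and the objective carefully is the whole game, which is Kyle's point about surprising exploits.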
[00:21:29] Kyle Smaagard: It makes for a good movie. Is it actually reality? I guess we'll find out.
[00:21:34] Dave Hoekstra: Yeah, cause that was the whole premise of WarGames, right? That was the whole premise: that the only way to save humanity is to destroy humanity. Right. And, you know, then we get into, you know, Asimov's Laws of Robotics.
[00:21:46] Dave Hoekstra: We get into, you know, all the fun stuff. We're talking about, you know, Philip K. Dick novels here, and really kind of going off the edge. And that's kind of, you know, this is mostly from my own curiosity standpoint: does this get talked about in ML/AI circles?
[00:22:00] Kyle Smaagard: Oh, for sure. And speaking of WarGames, I mean, it ended on a good note, right?
[00:22:03] Kyle Smaagard: They taught the machine that, hey, there's new rules to this war game, if you will. And if you play out all of the objective scenarios, there's no winning unless you just don't play. So yeah, there's a, you know, a good ending to that movie.
[00:22:19] Kyle Smaagard: Yeah, way to save that one. I haven't seen WarGames in a really long time, but I do remember that. It's a classic movie. So yes, it's absolutely top of mind. There's actually a conference going on right now called NeurIPS, and NeurIPS is sort of the AI conference, if you will.
[00:22:37] Kyle Smaagard: You know, years ago, and this probably won't mean anything to you, but one of the popular things, wav2vec, which was a technique for converting audio signals to digital information, was an outcome of that. But what's a big topic right now is the AI Bill of Rights, and that's talking about how do we safeguard, how do we put rules and policies and procedures in place to safeguard these things?
[00:22:59] Kyle Smaagard: Because, as we've seen... oh, if you recall, do you remember Tay, Microsoft's AI that they put on Twitter? I think it was probably like four years ago.
[00:23:09] Dave Hoekstra: Is this the one that got into the questionable words? I remember reading something about that.
[00:23:16] Kyle Smaagard: Yes. It got real racist
[00:23:17] Kyle Smaagard: real fast.
[00:23:17] Kyle Smaagard: So this is an example of when you don't set boundaries: these things will just learn from what people feed them. And you have people who think it's funny to feed them malicious data, cuz it's funny to hear a computer repeat bad things. Similar to how some people think it's funny to teach kids bad words, right?
[00:23:34] Kyle Smaagard: Yes. Those are the things. So as we're building these things, how do we put in safeguards? How do we think about what are all the, not just the legal ramifications, but the moral ramifications? And we talk about that all the time at Calabrio when we're talking about, you know, our predictive evaluation scoring tool that's out there.
[00:23:52] Kyle Smaagard: How do we make it so it's not a self-fulfilling prophecy? We don't wanna make a tool that rates agents by scoring them as zeros, because then the managers will look at it and be like, well, you always have a zero. And then the agent is gonna think, well, if I always have a zero, why do I bother trying?
[00:24:06] Kyle Smaagard: And then they don't try, and now we've got a self-fulfilling prophecy, with agents basically taking themselves out of the agent pool because they don't believe they have an impact. So it's those kinds of questions that we always converse about at Calabrio, figuring out how we can safeguard this to make sure
[00:24:24] Kyle Smaagard: the tool that we build isn't going to be used in a manner that causes problems for the agents, because that's the last thing we want. We don't wanna make an agent's life harder. We wanna make their lives easier. We wanna make the supervisors' lives easier. We wanna point them and say, hey, here's a direction that you can go explore.
[00:24:39] Kyle Smaagard: But we don't wanna say, hey, here's an agent that's really bad, when the agent actually isn't really bad, but maybe they just had a bad day. So if we're providing input into, perhaps, an agent scoring mechanism, how can we couch that with, okay, well, they were evaluated on two calls but they took 300?
[00:24:58] Kyle Smaagard: How do we give additional context to say, hey, this perhaps isn't representative of the agent overall? Or, you know, we just look at all of the calls instead of just one or two.
[00:25:08] Dave Hoekstra: Right. And I know you and I have talked about this a lot, about how, you know, machine learning relies pretty heavily on large data sets, right?
[00:25:19] Dave Hoekstra: And you know, for an organization that, you know, scores 200 calls a day, in our industry, that's a large data set, right? That's a lot of calls that get scored, but that's typically not enough for a machine learning kind of algorithm to kick in and piece things together. So how do you work with that kind of restriction?
[00:25:39] Kyle Smaagard: Well, this is where it comes down to: we try and procure more data. So when we're talking about data that's input, let's take our most up-to-date sentiment model, for example. We have over a hundred thousand examples of contact center-specific data that we're feeding into a pre-built language model that is trained on millions, if not billions, of other examples.
[00:26:01] Kyle Smaagard: So there's a ton of data feeding into these systems. And what we really try and do, when we're looking for, let's take sentiment for example. Sentiment is actually really rare in contact center conversations. If you look at it on a sentence-by-sentence basis, only about two percent of sentences across all of our data are negative.
[00:26:21] Kyle Smaagard: So finding that is tricky in and of itself, and we have to employ a lot of techniques to make sure, hey, we're finding enough of those negative examples. Because if 98% of our stuff is not negative, a model could just always say it's never negative, right? That's not a good model, cuz then you're never finding the actual negatives.
[00:26:41] Kyle Smaagard: So we have to employ all these techniques to make sure we're finding enough data of the important what we call classes and the exam negative, positive, and neutral for centimeter, what we refer to as classes. That's,
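The 98%-vs-2% imbalance Kyle describes is a classic class-imbalance problem, and one common technique is oversampling the rare class so a model can't win by always predicting "not negative." A minimal, self-contained sketch of that idea — the toy sentences and labels are invented, and real pipelines would more likely combine this with class weights or targeted data mining:

```python
import random
from collections import Counter

def balance_by_oversampling(samples, labels, seed=0):
    """Oversample minority classes until each class matches the largest one."""
    rng = random.Random(seed)
    counts = Counter(labels)
    target = max(counts.values())
    by_class = {}
    for sample, label in zip(samples, labels):
        by_class.setdefault(label, []).append(sample)
    out_samples, out_labels = [], []
    for label, items in by_class.items():
        # Keep every original, then draw with replacement up to the target size
        picked = items + [rng.choice(items) for _ in range(target - len(items))]
        out_samples += picked
        out_labels += [label] * target
    return out_samples, out_labels

# Skewed toy data mirroring the ratio above: 98% not negative, 2% negative
sentences = ["thanks for your help"] * 98 + ["this is terrible", "i want to cancel"]
labels = ["neutral"] * 98 + ["negative"] * 2

bal_x, bal_y = balance_by_oversampling(sentences, labels)
print(Counter(bal_y))  # both classes now appear 98 times
```

Oversampling repeats the rare examples rather than creating new information, which is why Kyle's team still has to go hunting for more genuine negative examples in the underlying data.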
[00:26:51] Dave Hoekstra: it's so interesting, right? And so the speaking from a kind of, Context center perspective, what are some of the, maybe an example of one of the more fun things you've worked on and then maybe an example of something that didn't go the way you hoped it would.
[00:27:05] Dave Hoekstra: Curious to kind of round this into some examples of how we can make the audience here kind of
[00:27:11] Kyle Smaagard: conceptualize what we're talking about. You know, I wish I could talk about things that don't go very well, but generally the things that don't go very well, there's still ongoing research and we don't like to, we don't like to talk about ongoing research because then people will go out and try and sell it or sell it as a feature.
[00:27:25] Kyle Smaagard: But also, you know, we're really a research team, right? So when we think of an idea, it might be six months, it might be three years before it comes out. And when we talk about successes, I'm gonna bring up the phrase optimizer. Right, that's my favorite example.
[00:27:39] Kyle Smaagard: That was an idea that was generated over three years ago now. We were looking at the phrase manager, and I saw an example of how someone was using the phrase manager to get insights from their data, and thought, Hey, there's a better way to do this.
[00:27:57] Kyle Smaagard: Now, it took us three years to get to that, and we had to rely on a lot of new technology. But we eventually got it working, and got what I think is an amazing tool that really helps. Like, I saw someone use it, and she pulled up a category. I said, Hey, pull the category and generate it, and let's see what happens.
[00:28:16] Kyle Smaagard: And she looked at the results, and she pointed out one and said, I never thought that when people called asking about a particular service, they would reference buying a product. Like, it never occurred to her, but it belonged in this category. And she was just like, this is perfect.
[00:28:32] Kyle Smaagard: And that feeling we got, like, this is amazing. This is exactly what we designed it for: to be able to just click on one thing, and 10 seconds later you have something that you never would've thought of.
[00:28:41] Dave Hoekstra: I just imagine the scene, like when the Mars rover landed and you see the NASA control room, and everybody's hands go up and the champagne bottles come out.
[00:28:51] Dave Hoekstra: When, whenever a customer comes back to you with that kind of feedback that's pretty amazing. And that's, yeah. But go
[00:28:56] Kyle Smaagard: ahead. No, I was gonna say, you know, speaking of failures, we had a lot of failures on that road. If for everyone who's listening who's not familiar with phrase optimizer, I should probably mention phrase optimizer takes a bunch of phrases ostensibly with our phrase manager tool that you said, Hey, I have a category I wanna understand calls that fall into this category. And we use phrases to, to identify those, but to generate those phrases, typically our customers have to pull, you know, five people in a room and then generally they have a whiteboard session. Say hi. If we're talking about escalations, how are, what are all the different variations of the ways people can reference an escalation?
[00:29:30] Kyle Smaagard: Well, I wanna talk to a supervisor. I want to talk to a manager. I want to talk to your supervisor. I want to talk to your manager. So when we're talking about these phrases, there are all sorts of variations that you have to think of, and it really relies on a couple of different things. One, your command of the English language.
[00:29:44] Kyle Smaagard: If English is not your first language, people struggle with this. How do you get all the tenses? How do you get all the synonyms? And two, how much do you understand how your customers actually talk about these concepts? When they talk about them, do they reference supervisors or do they reference managers?
[00:29:59] Kyle Smaagard: Let's say you're a banking institution. Do you understand that, hey, canceling a credit card is something that generally leads to escalations? So there's all sorts of institutional knowledge. So the phrase optimizer was designed to really surface that and give you a tool, if you will, that can sort of
[00:30:23] Kyle Smaagard: generalize the command of the English language and say, Hey, I have an English language expert that, oh, by the way, understands our industry, and contact centers in particular. I'm going for escalations. Let me feed it a couple of examples, and this expert is going to give me 10 things that it thinks are similar to what I'm looking for.
[00:30:41] Kyle Smaagard: And I can very quickly go through and say yes or no, and generate more based on the new data set. So that's what phrase optimizer does. But along the way, we tried a lot of ways of doing it, and there were a number of instances where we would say, Hey, here's a bunch of examples,
[00:30:55] Kyle Smaagard: and then it would just go off the rails and tell us, Hey, I think this is similar, about things that were just crazy, not representative whatsoever of what we were going for. So there's a lot of, okay, well, this didn't work out at all. How can we feed in additional data that is more relevant,
[00:31:13] Kyle Smaagard: that gives us some better answers? So there's a lot of iteration, a lot of failure on the way to what we built.
[00:31:17] Kyle Smaagard: But at least it never got inappropriate, like the Microsoft one, right?
[00:31:21] Kyle Smaagard: No it didn't. Mainly cuz the data that we're feeding it isn't really, like, there's not really a lot of opportunity for inappropriateness in the context center data that we feed until
[00:31:29] Dave Hoekstra: we start talking about those escalations right now.
[00:31:32] Dave Hoekstra: So this has all been utterly fascinating and I think as kind of a final wrap up, it's interesting how. Well, the analogy between. Teaching a machine to do something and raising children are similar. You know, you mentioned earlier how we need to teach the machine not to be inappropriate.
[00:31:56] Dave Hoekstra: Like you said, we teach a kid to say a swear word and we all think it's hilarious the first time. So the child receives positive reinforcement that this is something they should do. Because we laugh and then we laugh the second time. Then the third time it's like, whoa, maybe. So we have to go back. We have to curb that and we have to teach.
[00:32:15] Dave Hoekstra: We have to expand the usage. And it's funny how almost everything you described about machine learning falls directly into that same exact approach. It's like, we can't go over there. No, that's bad. We gotta steer you back over here. Is that about basically what you've what you've learned in your time with machine learning?
[00:32:33] Kyle Smaagard: It absolutely. Because there's nuance, right? Your swear word example is great. Like, Hey we've taught you this. You now understand what it is. Oh, you just used it at Christmas with grandma talking about a present, right? All right. There's a little bit of additional context. Yes. You need to understand when you know what a swear word is.
[00:32:52] Kyle Smaagard: Now let's teach you when it's acceptable to use it versus when it's not acceptable to use it.
[00:32:57] Dave Hoekstra: I'll never forget, this is just my own personal story. We were at a baseball game. The Texas Rangers were playing. New York Yankees and my nine year old son was with me and the chant of Yankees suck started going on and my nine year old son turns and looks at me and he says, dad, can I say it?
[00:33:18] Dave Hoekstra: And but it, it's a great example of. In his mind, we had taught him the context of now he's starting to get when it's appropriate and when it's not. And I said, yes, you can say it in this time. And he boisterously cheered for the Rangers. But it's such a great example of how that works.
[00:33:34] Dave Hoekstra: Well, Kyle, this has been. Awesome and fascinating, and I feel like we could go for a lot longer, but we want to, in the interest of keeping this fairly manageable one of the things I always like to do is kind of give you the final word. Is there anything that out there that you wish people knew or recommendations or anything?
[00:33:51] Dave Hoekstra: This is, it's your platform. Go for it.
[00:33:54] Kyle Smaagard: Yeah, I mean, what I would throw out there from a Calabrio perspective, we have the beta cloud the innovation Center, beta Cloud, which has a lot of our new tools out there. I mentioned phrase optimizer that's out there. You can check it out. We've got a lot of interesting things.
[00:34:08] Kyle Smaagard: That are out there that you can play with, that are all built on the work that, that our team is doing here. So I encourage you to go check it out and then reach out to us. We love feedback. When things are working, when things aren't working what are some surprising results that perhaps you have seen or maybe inappropriate results that we should be aware of?
[00:34:24] Kyle Smaagard: Cuz we, we love seeing that cuz then we know how do we improve these things?
[00:34:27] Dave Hoekstra: Or maybe you have an extra large data set that Kyle and his team ge their hands on.
[00:34:31] Kyle Smaagard: Right. Absolutely. Yeah.
[00:34:34] Dave Hoekstra: Yeah, that's what
[00:34:35] Dave Hoekstra: We're always on the search for. Another data set. So, no Kyle, this has been amazing. I think a lot of learning a lot of people are very, gonna be very interested in kind of the message of what we're talking about today.
[00:34:45] Dave Hoekstra: And so I'll just. Tell you as we wrap this episode up, don't be a stranger. Let's keep talking about this. And as we learn new and amazing things here at Calabrio and beyond, let's make sure the world knows about 'em and that we can we can spend some time talking about some of the new advances in the ways we can potentially help our customers.
[00:35:00] Dave Hoekstra: So, I certainly really appreciate your time. Thanks for joining me.
[00:35:05] Kyle Smaagard: Yeah, absolutely. Thanks a lot, Dave.
[00:35:06] Dave Hoekstra: All right, so everybody this listening. Thank you guys as always for your time and your energy. We really appreciate it. And as always, you can contact us @Calabrio.com. If there's a podcast episode you want to hear or you have a question about your contact center deployment or even you just want to talk that's what we're here for.
[00:35:21] Dave Hoekstra: So please look us up, let us know what you want to see, and we'll be talking to you soon. Thanks again from the Calabrio podcast team. We appreciate it and we'll talk to you on the next episode of Working Smarter from Calabrio. Thanks
[00:35:34] Dave Hoekstra: everybody.