Programming Throwdown


When you ask Alexa or Google a question and it responds, how does that actually work? Could we have more in-depth conversations and what would that look like? Today we dive into conversational AI with Joe Bradley and answer these questions and many more.

Thanks for supporting the show!

Show Notes


00:00:15 Introduction
00:01:24 Introducing Joe Bradley
00:04:44 How Joe got into Conversational AI
00:21:35 Zork and WordNet
00:27:48 Automatic Image Detection/Captioning
00:39:31 MuZero
00:45:27 Codex
00:50:15 GPT and businesses
00:55:16 Artificial General Intelligence
01:00:05 What is LivePerson
01:16:30 Working at LivePerson
01:21:18 Job opportunities in LivePerson
01:27:04 How to reach Joe
01:32:40 Farewells

Resources mentioned in this episode:

Companies:

  • LivePerson: liveperson.com
  • PyTorch: pytorch.org
  • TensorFlow: tensorflow.org


★ Support this podcast on Patreon ★

What is Programming Throwdown?

Programming Throwdown educates Computer Scientists and Software Engineers on a cavalcade of programming and tech topics. Every show will cover a new programming language, so listeners will be able to speak intelligently about any programming language.

[00:00:00] Patrick Wheeler: Programming Throwdown, episode 122: Building Conversational AIs with Joe Bradley. Take it away, Jason.
[00:00:23] Jason Gauci: Hey everybody. So, one of my most fond memories is playing this game, this really esoteric game called Essex, which I think is named after a place in the UK. But it was like Zork, if you're familiar with that, one of these text-based story games where you would type, you know, go north, go west.
And the thing that amazed me, I mean, I was maybe seven or eight years old, the thing that amazed me was that I felt like I could just say anything to these computer players, and that they would respond with really interesting things, and it ran on a single floppy disc. And so, you know, honestly to this day, it kind of blows my mind.
But as a child, it just completely blew my mind. I mean, I was wondering if there was a real person somehow involved, like, you know, in real time, and I've always been really interested in conversational AI. And I feel like we're all just exceptionally lucky to have Joe Bradley here, who is the chief scientist at LivePerson, who is an expert in this field and really going to talk to us about how conversational AI kind of works, a bit of the history behind it, and, you know, how LivePerson and other folks do it today. So thanks so much for coming on the show, Joe.
[00:01:41] Joe Bradley: Hey, thanks for having me. I'm really excited to be here and even more excited about the fact that you just brought up Zork. For me, that was the one; it wasn't the Essex, which I didn't know about. But yeah, I spent hours doing that as a kid, and it's funny too.
I have children myself now and I see them, you know, just last week I went downstairs and they're not playing Zork. Right. But they're playing the equivalent on, you know, the Amazon device on the Alexa. Somebody coded up a little like exploration game, and now they're talking to that, but it's essentially the same interface just with the voice instead of text.
And it's still fun for them, which I think is really interesting. Even in this age of, like, hey, does everything have to be these cool looking graphics, and what's the UX gotta be like, there's still something powerful about just talking to a machine. Like you can talk like you want to, in a way that's easy, in a way that's natural for you.
And then having the thing respond and tell you a story. Right. That's cool.
[00:02:40] Jason Gauci: Yeah. I think that, you know, when it's abstract like that, your mind fills in those gaps, and fills them in a way that's really interesting and pleasing to you. Right. And so that's kind of the, I think it's Scott McCloud,
I want to say, if I'm getting that right. But he wrote a book about understanding comics, and he explains that originally comics were meant to be drawn really quickly, but another reason why comics work so well as a medium is that they're so abstract that when you read, for example, Dilbert, you kind of put yourself into Dilbert, versus if Dilbert was photorealistic, you wouldn't really be able to do that as easily. And so, yeah, games like Zork and Essex do that too. Your mind fills in all those gaps.
[00:03:27] Joe Bradley: Yeah. They hit this spot for us. That's like, Yeah. If you think about how evolved we are to have conversation, right.
And how important conversation is to us as human beings, right? It's the fundamental way in which, you know, we created efficiency, resources, wealth, right? All the things that we have today are founded on the abstractions that make conversations possible, right: language itself, and then the ability to have those in a two-way or multi-way dialogue so that you can build more than just what's inside your own mind.
Right. So, so I think in some ways it's surprising when I look at my kids playing Zork on a device. And in other ways, it's, you know, it's like, oh no, wait, we're evolved to do this. Like, this is what we want. We want to build stories. And now it's just interesting, it's kind of interesting that in the last 30, 40 years we've begun to be able to do that with an object.
Right. And that, that, like, we sort of know, and we sort of don't care and it's sort of still fun.
[00:04:25] Jason Gauci: Yeah. It's amazing. So, so I want to dive in. I would love to know how these things work, how Zork works, and then also fast forward to the future. But before we get into the tech side of it, you know, what is kind of your background, and what led you to building conversational AI at LivePerson? What's that story like?
[00:04:44] Joe Bradley: Yeah, it's a little, I don't want to say topsy turvy, cause it's not a bad story in my mind, but it's definitely a little all over the place. So I you know, if you go back far enough, you start getting to, you know, like a, like an interest in the opera singing and, you know, in a major in English literature and a whole bunch of stuff that like doesn't fit very well with where I am today.
At least not in most people's minds. So for me, you know, it's like, I tend to have these five-year kind of moments in my life where I get really passionate about something, and those end up being a stepping stone to kind of somewhere else. And so when I was, you know, out of undergraduate school and starting, you know, working as a classroom teacher in San Francisco, I was teaching math and science to seventh graders, which is, by the way, the hardest job I've ever done by a factor of 10, you know.
[00:05:42] Jason Gauci: It sounds difficult. I remember middle school as being one of the hardest points. I've only ever been in one fistfight in my entire life, other than martial arts, which doesn't count. And it was in middle school because somebody... oh, I was playing basketball and I blocked somebody, because I'm very tall, but I'm not particularly athletic, but I do have the height, and I blocked someone and he was upset.
And actually, I wouldn't really call it a fistfight. He just punched me in the face, and I was really upset about that. But yeah, I think middle school is an extremely challenging age, and you know, Patrick and I also have kids. And so that's an age I'm not really looking forward to, but we'll have to get through it.
[00:06:23] Joe Bradley: Yeah. I mean, I will not forget the day one of my students, this probably about six foot tall, 12-year-old girl, looked at me and said... well, I was just getting started. You know, like I said, I was this undergrad English major who was like, I want to go be helpful.
I want to go do something meaningful in, you know, in a city. And I had just moved to San Francisco. So I got this job teaching, and I was a couple months in. I was not good at it yet. I hadn't figured it out. And she looks up at me and she says, Hey, Mr. Bradley, this class is just like WWF Smackdown.
And I mean, I hung my head that day, because she was right, right? I was like, wow, this is not good. And I mean, thankfully the institution of public school in San Francisco was very helpful. They brought me this mentor teacher who had all this knowledge about how to work with kids.
It's really just simple things that you'd never think of. Right. Like, don't talk about what one kid isn't doing well; stand next to them and talk to the kid who's doing it right, and praise them. And then the other kid's going to suddenly look up and want to do the right thing. You know, all these sorts of tricks of the trade that were just life changing for me.
But anyway, yeah. So to get back to your question. You know, I sort of rediscovered a love I had for math. I studied it a bit in college, and I was doing teacher training on, you know, adding negative numbers or something like that. How do you teach a 12-year-old how to do that, and what would be good visualizations, and all these things.
And it just brought back to me like, oh man, I have so many questions, and there was so much further I wanted to take that. That led me back to school. And so I went to school for a master's degree in mathematics, you know, at SF State University, San Francisco State University, part of the Cal State system, which I think deserves a plug, because I feel extremely fortunate to have been able to go get an advanced degree in mathematics and pay something at the time like 800 bucks a semester to do it.
[00:08:15] Jason Gauci: Oh my gosh. Amazing.
[00:08:16] Joe Bradley: It's more now I know it's more now, but I think it's still a pretty good deal. And back then, it's, you know, that's life changing stuff, right? Like these are cultural institutions or, you know, that we have built for ourselves that affect people's progress in life dramatically.
And so that was, I think that's a special institution for me, that was a special time, you know. And that kind of led into physics, right? Like I was doing the math and I was like, well, I want to do something with this math; as fun as the abstraction is, and I've always kind of been attracted to abstraction,
I want it to be good for something. So I ended up going up to the University of Washington, moving up to Seattle and working on a PhD in physics. And that kind of led me to, you know, obviously more applied mathematics, obviously beginning to work in statistics much more deeply, because I ended up doing a bunch of experimental work.
And, you know, in addition to turning bolts and building instruments and firing x-rays and lasers and stuff, you've got to go do the data analysis. And we started to think that there was some, you know, more advanced statistical analysis that would help us understand convolutions of data and all these things.
So that sort of started to filter in. And as I became a full-time scientist, I just found a lot more passion for that. When I was working with the national labs, there was one, like, sort of component of a larger decision that led me to change tracks and end up kind of in, you know, in the industry I'm in today.
And that was kind of by way of Amazon, right? So I sent all these resumes out, because I was interested in getting into machine learning. It's a live field. It's new. I'd been reading the papers, and I was like, this is a fun time, you know, as opposed to physics, which is hundreds of years old, and everything's super disciplined and, you know, very narrow and very tight.
And it takes a long time to approach anything state-of-the-art. I was like, machine learning, well, this is different, right? This is new. This is live. You can, you know, you can learn it, you know, not quickly, it's not easy, I don't mean to say that, but you can get to the front of the discipline, you know, much more effectively.
And I think there's just a lot of, like, ways in which knowledge is shared in the ML community. And they're pioneering, like, this sort of open academic research and shared code and all these things that wasn't really happening in physics, that was just starting to.
[00:10:27] Jason Gauci: Yeah, I think you're seeing the same thing with economics.
So I've always been really interested in economics. And what you're seeing is ML start to come in, because economics, especially microeconomics, is ultimately about people and the way they behave. And so it's statistical by nature, because we're not going to have a model of the human brain's processing, so we're going to be estimating that.
And so, yeah, you're seeing ML make a ton of strides in that field. And I think at some point ML will diffuse and just be a part of all of these fields, and ML as we know it now will become more of a core, you know, how do we make the learning work really well. But I think we'll get to the point where maybe even conversational AI won't even be ML. It might be part of, like, speech understanding or something like that.
[00:11:17] Joe Bradley: Yeah, I think that's right. I mean, I think if you really back up and think about what machine learning is, and even if you, even, if you think about that in the context of the neural nets, what are they really doing? They're taking these, like these qualitatively different kinds of problems because they have such a high degree of dimensionality in such a weird set of correlations across the variables that describe them.
And then they're trying to find these like really useful. Lo local potential minimum in them, right? Like these really useful like solutions, whether or not they're, you know, the perfect or the global, you know, I mean, I know there's a whole branch of ML devoted to finding, you know, complex problems and all that stuff.
Right. But I think as the field has evolved, we realize that real problems are way too dimensionally complex to solve in those exact ways, or to even approximate towards those global solutions, and we're all just okay with sort of these local solutions. But, like, any field that has anything approximating a real problem, and most good fields do, is going to benefit from that basic approach.
I think, I mean, obviously it creates new problems of, like, well, how do you know if your answer is good enough, and what does good enough mean? But for real problems, those are, you know, questions you just have to cope with.
[00:12:28] Jason Gauci: So, so you didn't have a CS, don't have a CS degree, correct?
[00:12:34] Joe Bradley: No. I'm, like, a self-taught research programmer, at the start anyway. And I got a little bit of a crash course in how bad a programmer that made me when I worked at Amazon. For sure.
[00:12:45] Jason Gauci: Yeah. Let's go back a tiny bit. Well, how did you interview at Amazon? Because this is something we get asked constantly, and, you know, someone says, I have a background in physics, or I have a background in economics, or I have a background in chemical engineering.
And, you know, I want to interview at Amazon. How did you get the right mindset to be able to do that?
[00:13:06] Joe Bradley: Yeah, so, so, it was a case of like, there's a little bit of luck in there, but I also was intentional about how I did it. So I can't claim this will always work, but I think it was a relatively smart strategy.
So I'll explain a little. So I did, as I said, send out a bunch of resumes and heard very little back. Right. Basically, I think people would see that resume and be like, who's this national lab physicist doing an x-ray experiment? Like, yeah, get this thing off my desk was kind of an easy response for most sane humans to have.
[00:13:33] Jason Gauci: Or sadly it's even an AI, so a person didn't even have an opportunity to see the potential, right?
[00:13:40] Joe Bradley: Could be. I mean, our recruiters at LivePerson, right, they definitely still look at a lot of resumes, but I think there's search queries. You know, you gotta be smart about how you set up your resume. You want to have the right search terms on it, all this stuff. You gotta get past that, you gotta get a human looking at it.
[00:13:53] Jason Gauci: That's true.
[00:13:54] Joe Bradley: And then what I did is I wrote, even though it wasn't asked for or common at the time, and it's probably less so now, what I thought was a thoughtful cover letter explaining what the heck I was doing there and why the heck I thought this made sense. And so I tried to break down my experience. In particular, I spent, you know, a while crafting this paragraph about how I had been doing these experiments in a national lab context, where we'd go in and we'd have limited x-ray beam time, and we'd sit down and we'd have, you know, 24 hours to conduct the experiment. And we had to be very thoughtful about how we planned it.
We had to be very good at troubleshooting problems. We had to, you know, we had to really deliver in this very tight, constrained environment, or we were not going to be successful. And my pitch was, like, look at what I've delivered. Here's the output of these publications, this advance in this field.
Right. So I didn't want to make that, like, a four page monologue; I had to whittle that down to, you know, four or five good sentences. And there was some recruiter, I knew her name at one point, I've forgotten it now because it's been quite a while, it'll probably come to me after the call, but there was this woman, a recruiter, who saw it and read it and just thought, I think this could work. Right. I think we should talk to this guy. And I did talk to her, as I said, you know, later after I joined Amazon. I had a couple of conversations with her, and she kind of relayed to me that reading some of that text was important.
It was an important part in her decision-making. It wasn't the only thing, of course. But I think that's smart to do. I think, you know, obviously you got to get past the sort of AI or the querying, all that stuff. But then you should remember, there's a human on the other end of this, and they need me to make sense.
If I'm not coming from what they see and what they expect, they need me to make sense of why my experience fits in and how that story should go together. We're all narrative machines in the end, right? So they need me to give them that narrative. So take the time and the space in your resume or in a cover letter,
however you do it, to make sure that story is told, and make sure that story is clear, but also make sure it's, like, super concise and tight. I can't make that point strongly enough. If you write well, that also says something. And you don't have a lot of their attention; you've got 30 seconds of attention if you're lucky. How do I tell that story that quick?
[00:16:10] Jason Gauci: Yeah, absolutely. And so when you went to Amazon, were you working on conversational AI on day one, or is that a team that you transitioned to later?
[00:16:19] Joe Bradley: So I ended up doing, you know, a number of natural language applications at Amazon and at, um, at Nike subsequently, right? A lot of it related to, you know, how people are, what kinds of interests they have in products,
what's useful to them in a product, things like that, what their website experience was like, a lot of analysis of text. And, you know, I started building conversational AI with some of the experts we have at LivePerson. You know, my real first foray into conversational AI was with LivePerson; that's where I began to build.
And we've brought on a lot of people. So I consider myself a little bit more of a generalist. You know, my background is more statistics, machine learning, et cetera, and obviously I've learned a lot about conversational AI in the last three, four years working at LP. But we of course have people that have spent their whole careers on it as well.
[00:17:12] Jason Gauci: Got it. Okay. So if you could kind of give us the lay of the land here. So there's natural language processing, which I think, and please correct me here, is an umbrella that covers, let's say, translation, maybe embeddings or semantic understanding of language, and conversational AI would also be under that umbrella of NLP, is that correct?
[00:17:38] Joe Bradley: Yeah. I think that most people would agree with that. It's like most designations like this, where there's always, you know, somebody who feels like "no, no, my under, you know, my, my semantic description of this field is more accurate", but,
[00:17:50] Jason Gauci: Right. Yeah.
[00:17:51] Joe Bradley: But yeah, I mean, by and large NLP is like a big box and inside that box goes like the science of processing natural language, right. Like kind of quite literally and, and conversation AI, as a kind of processing of natural language. It's a very specific one, obviously a very important one we think. But yeah. Yeah. I think that's fair. You also have an insight, a conversational AI and, or insight of natural language processing. You know, you have the sub field of natural language understanding, right.
Which is which is again, very important to conversation AI and that's sort of this you know, again, it's another one of these ones that like, is what it says it is. Right? So it is the science of trying to, know, use use algorithms to create. You know, what do you, what do we mean by understanding essentially a structured form of, or a structured interpretation of words, of natural language that we use, you know, go from this like unstructured text, you know, go from that format and sort of begin to categorize and, you know, in adorn these natural language utterances with like things that are meaningful lesson to us in the abstract.
[00:19:01] Jason Gauci: Yeah. That makes sense. And then in the other direction, you're talking about natural language generation, where you say, I have this structure, this JSON, which has this information that I know somebody needs, you know, the movie starts at this time, or this is when your pizza order is ready, and that has to get turned into text that would feel natural to people.
[00:19:23] Joe Bradley: That's right. Yeah. You've got that, you know, the sort of semantic box of metadata. However you get it, wherever that arose from, some system did some processing and said, like, here's the semantics or the meaning of what I want you to say, machine. And then the NLG portion: basically, input is that, output is spoken language or written language, or whatever it is. The other piece in the middle there, which, you know, people can slice and dice in different ways,
and which is still a very open sort of research question, is really all the dialogue management, right? So you have the text coming in, or the words coming in, and let's assume it's just written text, cause then, you know, you can always kind of model the speech and the voice side at the other end of this pipeline.
So you have text coming in, you have this understanding capability whose job it is to, you know, decide what the thing that was said means, I guess that's the best way to say that. And then that comes in a structure now that interacts with the dialogue manager, you know, and again, that can kind of be a few pieces, but broadly speaking, a dialogue manager's job is to take that information and understand it in the context of, you know, what's going on in the conversation up until now.
And some managers do a lot of that, and some managers do none of that, but logically that's the responsibility of this piece of the system. And then to turn that into a next action, and then that next action gets passed out. As one element of that next action, there may be lots of things that happen, right? It may go call an API and check your balance for your bank, or whatever it is. But it's also gonna, you know, result very likely, or most of the time, in wanting to send you some text back. And so, you know, in a system that has a decoupled NLG component, as you mentioned before, it will now send back this metadata blob, and the NLG job will be to go and turn that into language, and thus kind of conversation.
Right? So those, I think, are, in big animal letters so to speak, the big blocks here, as far as a conversational AI system goes.
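The three blocks Joe walks through (NLU, dialogue management, NLG) can be sketched as a toy pipeline. Everything below, the intent names, slot format, balance value, and replies, is invented for illustration; in a real system each stage would be a trained model or a service, not keyword matching.

```python
# Toy conversational-AI pipeline: NLU -> dialogue manager -> NLG.
# All intents, slots, and replies are made up for illustration.

def nlu(text):
    """Turn raw text into a structured interpretation (intent + slots)."""
    lowered = text.lower()
    if "balance" in lowered:
        return {"intent": "check_balance", "slots": {}}
    if "pizza" in lowered:
        return {"intent": "order_status", "slots": {"item": "pizza"}}
    return {"intent": "unknown", "slots": {}}

def dialogue_manager(interpretation, context):
    """Pick the next action given the interpretation and the conversation so far."""
    context.append(interpretation["intent"])  # remember what was asked
    if interpretation["intent"] == "check_balance":
        # A real system might call a banking API here.
        return {"action": "tell_balance", "balance": 42.50}
    if interpretation["intent"] == "order_status":
        return {"action": "tell_status", "item": interpretation["slots"]["item"]}
    return {"action": "clarify"}

def nlg(action):
    """Turn the structured next action back into natural language."""
    if action["action"] == "tell_balance":
        return f"Your balance is ${action['balance']:.2f}."
    if action["action"] == "tell_status":
        return f"Your {action['item']} order is being prepared."
    return "Sorry, could you rephrase that?"

def respond(text, context):
    return nlg(dialogue_manager(nlu(text), context))

context = []
print(respond("What's my balance?", context))  # Your balance is $42.50.
print(respond("Where's my pizza?", context))   # Your pizza order is being prepared.
```

The point of the decoupling Joe describes is visible in the interfaces: the dialogue manager never sees raw text, and the NLG never sees the conversation, only the metadata blob it is asked to verbalize.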
[00:21:35] Jason Gauci: Yeah, that makes sense. So, so I think if I remember correctly, Zork didn't have conversation. It was more of, like, pick up sword, hit troll. I mean, it's been a long time since I played it. But there have been interactive fictions with conversations from a long time ago, like this Essex one, and I'm sure there are other ones of its era. And trying to unpack it today, I think what they were doing is they probably had some WordNet ontology, you know, on the disc.
And so I think maybe what they were doing was, you know, taking your sentence of what you told the avatar, looking at each word, and maybe traversing up this WordNet tree. So just a bit of background for folks: WordNet is basically a tree of the English language, and it's been around for a very long time.
So you can imagine, I think the root word is entity, if I remember correctly. So everything is a child of entity, and, you know, there's objects and abstractions. So if you were to take, like, happy, it would be an emotion, which is an abstraction, which is an entity. And so you can kind of go up this chain.
And so if I asked this avatar in this game from the 1980s, you know, are you happy, or are you, I don't know, I guess elated or something like that, what they would do is go up this WordNet chain. And then they probably by hand kind of thought about what to do if they get these really high level words.
And they were able to kind of cover, you know, a pretty broad range there. And so my guess is it was something like that: they were just pulling out individual words, and then if they see that word, they know, okay, this person asked me about the key, or the person asked me about being sad, so start the sad narrative, or this person can go on this quest to make me not sad anymore. Things like that.
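Jason's guess about climbing a WordNet-style ontology can be sketched with a hand-rolled mini hierarchy. The real WordNet is a large lexical database (and a graph, not a simple dict); the words and scripted responses below are invented to show just the "walk up to a more general word" idea.

```python
# Tiny stand-in for a WordNet-style hypernym hierarchy. Each word maps
# to a more general word, ending at the root "entity".
HYPERNYMS = {
    "elated": "happy",
    "happy": "emotion",
    "sad": "emotion",
    "emotion": "abstraction",
    "abstraction": "entity",
    "key": "object",
    "object": "entity",
}

def hypernym_chain(word):
    """Climb the hierarchy from a word up to the root."""
    chain = [word]
    while word in HYPERNYMS:
        word = HYPERNYMS[word]
        chain.append(word)
    return chain

# A hypothetical 1980s-style handler: hand-authored responses keyed on
# a few high-level categories, reached by walking up the chain.
RESPONSES = {
    "emotion": "start the feelings narrative",
    "object": "describe the item",
}

def handle(word):
    for ancestor in hypernym_chain(word):
        if ancestor in RESPONSES:
            return RESPONSES[ancestor]
    return "no idea"

print(hypernym_chain("elated"))  # ['elated', 'happy', 'emotion', 'abstraction', 'entity']
print(handle("elated"))          # start the feelings narrative
```

This is how a handful of hand-written rules at the "emotion" or "object" level can cover a broad vocabulary: any word that climbs into one of those categories triggers the same scripted branch.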
[00:23:37] Joe Bradley: Yeah. Yeah, no, I think that describes a lot of, you know, what natural language understanding was, and kind of had to be, for a long time. Right. There were sort of techniques like that, you know, a lot of different, very ornate ways of dealing with language, because language is exceptionally complicated, and dialogue even more so.
Right. Like, just the understanding piece of this has been an extremely important and huge research effort for a long time. Right. And I think a lot of times we sort of claim it's a lot more solved today than it really is. Like, that's sort of something interesting to talk about too: how good are we at natural language understanding? How good are we at dialogue handling?
And my answer is we're not as good as we think we are, neither in the academic nor in the professional context. And part of doing this work well is, you know, beginning with that recognition and then realizing that we have to build, you know, tools and capabilities to help us get to the level where many of us already believe we are.
And I think there's a lot of reasons for that. There's hype in both the professional and the academic context, which would also be interesting to talk about. But you do see, you know, material advances that are very meaningful, in the same way that we saw for computer vision, you know, starting back in 2006, the beginning of a neural net approach and the first kind of, you know, early autoencoders, right,
that start to change that problem around. And then the development of convolutional neural networks, where the network satisfies some of the symmetries of the system itself. And that allows it to work really well and kind of reshape the functional capacity of these networks well enough so that they can, you know, make really good guesses at these approximate local minima we were talking about before. You start to see some of that happen in the last five to ten years in the natural language space.
But it's hard. It's frankly harder, in my opinion, because, like, I can tell you what the symmetries of a physical space are, right. Like, you know, move left, move right, move up, move down, rotate, reflect.
[00:25:49] Jason Gauci: Yeah. I mean, if you're training something to, like, play chess or something and you flip the board, you know, on its axis, your strategy doesn't have to change, or it's a mirror strategy. And so you could even just directly, you know, hack that into the system, and now it only has to learn half as much. But for language it's totally unclear what that symmetry is.
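Jason's board-flip point is the standard symmetry-based data augmentation trick: mirror each training position, and the move that goes with it, to double the data without playing new games. A minimal sketch on a made-up 3x3 board (not real chess, just the reflection idea):

```python
# Symmetry-based data augmentation: reflect a board left-right and
# mirror the move with it, so the model only has to learn "half" the space.

def flip_board(board):
    """Reflect each row left-to-right."""
    return [row[::-1] for row in board]

def flip_move(move, width):
    """Mirror a (row, col) move across the vertical axis."""
    row, col = move
    return (row, width - 1 - col)

def augment(dataset, width=3):
    """Return the dataset plus its mirrored copy."""
    out = list(dataset)
    for board, move in dataset:
        out.append((flip_board(board), flip_move(move, width)))
    return out

board = [["x", ".", "."],
         [".", "o", "."],
         [".", ".", "."]]
data = [(board, (0, 0))]          # one (position, move) training example
augmented = augment(data)
print(len(augmented))             # 2
print(augmented[1][0][0])         # ['.', '.', 'x']
print(augmented[1][1])            # (0, 2)
```

The key detail is that the label (the move) must be transformed with the same symmetry as the input, or the augmented examples teach the model the wrong thing.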
[00:26:10] Joe Bradley: Well, yeah. I mean, what is a symmetry in language? Like a synonym or an antonym, right. And so how do you teach a computer to understand that?
Right. And I think when you start to break the problems apart, it starts to become clear, or at least you can make a good, intuitive explanation for why, you know, attention mechanisms are really meaningful and why the neural nets for some of these modern applications have developed the way that they have: because they're trying to solve some of these symmetry problems in the same way that you and I do. Like, you know, you know words through context, right.
And we're really good at recognizing which elements of context actually impact each other, even in, like, a long form piece of text. So I think we've started to learn some of that. And I think obviously there have been, you know, a lot of major advances recently. Separating out what's kind of hype, or what it means to have advanced with something like a GPT-3, for example, what it's really good at versus what it's really not good at,
I think that's actually very hard for us, in some ways because the nature of what it's doing is producing such compelling text on its own that it's hard for us not to imagine, you know, some kind of Wizard of Oz entity back there that's all-knowing and doing that, and really tease that out.
[00:27:30] Jason Gauci: Yeah. There's one research paper, I'm sure it's a whole body of research, but I thought it was really fascinating. I can't remember if it's using GANs or transformers, I don't totally remember, but it would caption an image. So you'd give it an image,
and what would come out would be, you know, a girl is sitting on a red swing, you know, talking to a boy near a tree. And I thought that was so cool. I mean, when I saw that, it blew my mind. And, well, it would actually be great to hear your opinion, I mean, it could be hype, cause I don't know the tech that well, but I feel like we reached a milestone when I saw that paper.
[00:28:12] Joe Bradley: No, I a hundred percent agree with you. So automatic image captioning is an example, almost an anti-example, of GPT in some ways, because of what it's doing. And there's a long way to go there. I mean, it's not going to handle really complex tasks with the eloquence that you and I would, or any human. But it did something fundamentally different, right? And a lot of this came out of Google, in sort of the image search space.
Like there's, I think, a lot of genesis there. But to do that work well, what had to happen is you had to develop a shared mathematical representation of the visual image and of the text, right? And the way you typically train models like this, I mean, it's been a while since I looked at this research, so this could have changed, but the last time I looked, which was several years ago, you have two vectors, one representing the vectorized form of the speech or the language, one representing the vectorized form of the image. And then you're trying to optimize their inner product, right? So they're getting closer and closer together.
[00:29:17] Jason Gauci: Yeah. Just to double click on that, our last episode was with the CEO of Pinecone, and Pinecone is a vector similarity database. And we talked about how you can take sort of any classification problem, or even in an unsupervised or self-supervised way, you can take a chunk of data and create an embedding, a relatively low-dimensional representation of that. And what Joe's talking about here is, you can actually do this with two different media and try to join them together, you know, when you've seen them together on the same website, or if you have a hand-curated set. And now you end up with two things that are projected into the same space, and then you can go backwards with one of them.
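The joint-embedding idea Joe and Jason are describing can be sketched roughly in code. Everything below is invented for illustration: the "embeddings" are hand-picked toy vectors standing in for the outputs of learned image and text encoders (as in CLIP-style contrastive training), and the similarity matrix shows the inner products that training would push to be largest on the diagonal.

```python
import numpy as np

# Toy sketch of a shared image/text embedding space. A real system would
# LEARN the two encoders; here their outputs are just made-up vectors.

def normalize(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Pretend embeddings for three images and their three matching captions.
image_vecs = normalize(np.array([
    [0.9, 0.1, 0.0],   # photo of a girl on a swing
    [0.1, 0.8, 0.2],   # photo of a dog in a park
    [0.0, 0.2, 0.9],   # photo of a red dress
]))
caption_vecs = normalize(np.array([
    [0.8, 0.2, 0.1],   # "a girl sitting on a red swing"
    [0.2, 0.9, 0.1],   # "a dog playing in the park"
    [0.1, 0.1, 0.8],   # "a red dress on a mannequin"
]))

# Inner products between every image and every caption. Contrastive
# training pushes matched pairs (the diagonal) up, mismatches down.
similarity = image_vecs @ caption_vecs.T

# "Going backwards with one of them": retrieve the best caption per image.
best_caption = similarity.argmax(axis=1)
print(best_caption.tolist())  # each image picks its own caption: [0, 1, 2]
```

Once both media live in the same space, the same nearest-neighbor lookup works in either direction, image-to-caption or caption-to-image.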
[00:30:09] Joe Bradley: Yeah, that's exactly right. Sorry for not giving a little more background there, but to me that's fascinating, because it's a step in teaching a system to have a multimodal representation, right? So now the image and the text are in a shared space, as you say, a shared mathematical space and a shared semantic space, which is how you and I work.
Right? Like we don't differentiate, you know, the concept of a red dress, the words, the picture of it, and a physical red dress. Those are all of a piece for us, right? They're all related, obviously very strongly; it's just different representations of the same fundamental object. And that's very much not what GPT did. And don't get me wrong, I'm not trying to talk trash about GPT. I think it's an amazing advancement. In some ways I think we should think of these big language models as national resources that we're building. I mean, it literally takes so much power to train one of these things that you could measure it in units of Hoover Dam. Like, that's actually a reasonable scale. I forget how many Hoover Dam days it takes to train GPT, but it's not like 0.005 or anything. It's a real number.
[00:31:21] Jason Gauci: Yup. Yeah. And you can transfer learn off of it. So, it really is. I think that's a beautiful way of looking at it. It really is like a national treasure that we've all spent a ton of time and energy curating. And now everyone can benefit.
[00:31:36] Joe Bradley: Yeah. With a whole bunch of infrastructure required to do it, right? You can't go do that without the data centers. You can't go do that without industrial-strength power lines going everywhere, and yada yada yada. But GPT is different, right? What it's trained on is different, and it's not trying to find this multimodal representation, right?
It's trained on basically most of the text of the internet, right? So it's a window into what for us is a projection of reality onto text. That is GPT's reality, right? That's what it knows. Which is why, when we start thinking about the ways in which the model is amazing and the ways in which it was confusing, at least to me, it started to make a little bit more sense. So you think, all right, if I train this model and I ask it who the president of the United States was in 1815, it's going to give me a good answer, because it's got all these good relations, it's got a bunch of text to work with that can tell it that answer.
I mean, obviously it's an amazing advance that we're able to synthesize that into a system that can make the inference. But then if you go and ask it who was the president in 1705, before the country was instantiated, it'll give you a reasonable answer of a person who's kind of sort of presidential, like Ben Franklin or whoever. That's probably too early for Ben Franklin, but yeah, you can go in and sort of trick it with these questions and these premises, and it doesn't do a great job, without some further prompting and some further help, at understanding that the question doesn't make sense in that context, because it's this really big and sophisticated association machine that doesn't have larger political context.
It doesn't really understand, in the same way that you and I do, what it means for some of these historical events to have taken place. It's really doing much more associative work. And it lives on this thin layer; language is kind of a membrane over our reality, right? It's stuck on that membrane, and it can't escape.
[00:33:41] Jason Gauci: Yup. Yeah, I think so. Correct me if I'm wrong: GPT-3 is trained such that they take a string of N words, and then they try and predict the next word, the N-plus-first word. And so what you end up with is, you can now give GPT-3 a sentence or a paragraph or what have you, and it will generate the next word that it feels belongs there.
[00:34:09] Joe Bradley: Well, what you just described is fundamentally what a language model is, and GPT-3 is a language model, right? Mathematically, the definition of a language model is: given N words, predict the N-plus-first. And however you get there, however you want to do that, that could be a set of rules in the background, that could be whatever; some algorithm's got to do that job.
And the reason that's an important construct in the field is that once you have that, it unlocks all these other things, right? Now I can ask questions. I can turn that prediction, and the underlying representations of the text and the knowledge that fuel it, into many of the applications that we care about today. So it's kind of a building block. And you train these language models in different ways; you don't always specifically train it only for that task, even though that is the mathematical, fundamental definition of what task a language model has to perform.
But there's lots of transfer learning and different ways you get the model to be good at doing that. And then of course, on the other side, there's a bunch of applications you go and turn on for it. But a lot of this self-supervised aspect of training language models is really about making judgments of how well a word fits in its context.
[00:35:24] Jason Gauci: And the nice thing about something like GPT-3 is that although you have to use, you know, Hoover Dam days' worth of energy, you don't have to do any manual labeling, because it's self-supervised. It's just scanning this data from the internet, trying to learn the rhythm of the data, and it can see right away: did I predict this word correctly or not? Without any human intervention.
[00:35:50] Joe Bradley: Yeah. That's why it's powerful, and that's also why it's limited. There's only so much you can learn that way. But there's a lot you can learn that way, and as a baseline for learning other things, it's kind of the best one we've got right now.
That's why people are so excited about it. And it's pretty cool. I mean, we didn't have anything like this 15 years ago, and now we have models that can tell stories that really are, you know, rapturous in their detail. I think it was brilliant to bring out GPT-3 with the article that it wrote about itself, right?
It's an amazing way to illustrate that, hey guys, we're onto something new here. We're in Star Trek territory, kinda sorta, a little bit.
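The next-word setup described above can be sketched with a deliberately tiny stand-in: a bigram counter plays the role of the giant network, but the self-supervised part is the same, the "labels" are just the following words in the raw text itself. The corpus and the `predict` helper here are invented for illustration.

```python
from collections import Counter, defaultdict

# Minimal sketch of self-supervised language modeling: no human labels,
# the next word in the text IS the training target. A bigram counter
# stands in for the neural network; GPT-3 does conceptually this with
# vastly more context and parameters.

corpus = "the cat sat on the mat the cat ate the fish".split()

# Build next-word statistics from the raw text alone (self-supervision).
next_word = defaultdict(Counter)
for w, nxt in zip(corpus, corpus[1:]):
    next_word[w][nxt] += 1

def predict(word):
    """Predict the most likely next word given one word of context."""
    return next_word[word].most_common(1)[0][0]

print(predict("the"))  # "cat": it follows "the" more often than "mat" or "fish"
```

The "did I predict this word correctly?" check Jason mentions falls straight out of this setup: compare `predict` against the actual next word anywhere in the text, no annotation required.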
[00:36:32] Jason Gauci: Yeah. So, bringing us back down to earth: in my opinion, there are two big pieces that are missing, and there are probably a lot more, but I'd love to get your take on it.
I think one piece that's missing is getting uncertainty estimates from really any of these models. So to your point, if you ask, "Who was the American president in 59 BC?", the model should just say, "Well, I'm not confident about anything here," right? And then the second piece is having some kind of symbolic understanding, where it can, in a modular way, understand and compartmentalize America as a concept and when that concept started, and you can use some first-order predicate logic to say, well, the question is invalid because of these symbols. I think we've gone to this sort of embedding soup, and because of that, we've lost the ability to think about things and reason about things in a logical, methodical, search-based way. And I feel like those are, at least in my opinion, the two big missing pieces.
[00:37:46] Joe Bradley: Yeah. They're both really interesting, and they're great points. Let me try to at least take on one of them; maybe we'll get to both, cause I love to talk about both. So as far as the logic point that you make: look, I agree with both of your points, first of all.
I think those things are missing. I think we don't know how to teach computers to do the logical element, the higher-order logic, well. But I personally don't think the answer, at least not in as flexible a way as you and I do it when we think through the universe, is to construct a kind of symbology of it, that these inferences are going to be mapped into, with some kind of logical computation that happens on the symbology and then goes back to the embedding space. I actually don't think that'll work.
I mean, I could be wrong. What do I know? I'm just one guy. But the reason is that the space of constructs and abstractions that we use to make decisions, the higher-order logic, is itself just as complicated as the reality it's trying to simplify, right? In terms of the relations between the constructs and the boundaries, a lot of things are complicated because the boundaries around a construct, like what's in the set and what's not in the set, are just hard questions. And so I think in the end, we're going to have to find a way to teach a machine to construct representations like that for itself.
Right. But there is going to have to be some notion, some parallel process, that serves the same function as the logical hierarchies that you and I would use. And we teach humans how to do that, right? And sometimes, when you get into deep political disagreements, at the root of that is just a misalignment of some of these definitions of categories.
[00:39:40] Jason Gauci: Just to riff on that, one thing that I think paves the way is MuZero. I don't know if you've been following MuZero from DeepMind, but DeepMind's initial claim to fame was beating the world master at Go. So Go is this board game.
It's just a lot more complicated in terms of branching and other things than chess, and they were able to beat Lee Sedol, and now a computer is the world Go champion. But they didn't stop there. The next thing they did is they built something called AlphaZero, and AlphaZero removed all of the Go-specific logic and treated the board as just a field of features. Now, they did have to do some feature engineering, right? Go has just white and black stones; chess doesn't. Chess has a whole bunch of variety, so you have to do some proper feature engineering. But beyond that, the rules of Go and all those tricks were not in AlphaZero, and it was able to perform just as well as AlphaGo.
But what they did still have, written by hand, is the game tree. So when they're doing this game tree search, the mechanics of Go are still built into that. In other words, AlphaZero, and now it works on many games, will say: take this Go move, this checkers move, this chess move. And that would go into some program that a person wrote that would make the move, adjust the game appropriately, tell it if it won or not, and that would conclude a simulation. So they would simulate using the hand-coded engine.
And after so many simulations, they have a good action, right? So then they took it to the next level with MuZero, where they don't even use the rules of the game. The neural network has to represent the rules of the game, and it actually has to do the simulation in the neural net.
I think they're using like an LSTM or a transformer or something. And so literally they've gotten to the point now where you give it a Go board and it thinks for a while, does all these simulations, and it just comes back and says, you know, place the stone here. And I feel like they're starting to unpack that planning and that reasoning and that symbology, even though it's totally uninterpretable to us. They're starting to unpack that process, which I think is really exciting.
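A very loose sketch of the AlphaZero-versus-MuZero distinction Jason draws, with everything stubbed out for illustration: the "game" is just adding numbers, the hand-coded engine plays the role of the real rules, and the "learned" dynamics function stands in for the neural network MuZero trains to imitate those rules.

```python
# AlphaZero-style search: rollouts call a hand-coded game engine.
# MuZero-style search: rollouts call a LEARNED dynamics function mapping
# (state, action) -> next state, with no access to the real rules.
# Both are stubbed here; in MuZero the dynamics model is a neural net
# trained end to end, and the state is a hidden representation.

def handcoded_engine(state, action):
    # The real rules, written by a person (AlphaZero's simulator).
    return state + action

def learned_dynamics(state, action):
    # An approximation of the rules inferred from play (MuZero's simulator).
    # Stubbed here to match the real rules exactly.
    return state + action

def rollout(step_fn, state, actions):
    """Simulate a sequence of actions with whichever model we are given."""
    for a in actions:
        state = step_fn(state, a)
    return state

plan = [1, 1, -1]
# The planning procedure is identical; only the simulator changes.
print(rollout(handcoded_engine, 0, plan), rollout(learned_dynamics, 0, plan))
```

The point of the sketch is structural: swapping `handcoded_engine` for `learned_dynamics` is the whole AlphaZero-to-MuZero move, the search on top stays the same.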
[00:42:28] Joe Bradley: Yeah. I mean, it's like if you had a child and you never told them how to play the game, but you just let them watch a bunch of people play. How well would they do in understanding the rules, and could they construct that abstraction?
That's probably easier for some people, harder for others, but fundamentally possible, right? We wouldn't see that as beyond us. I think it's interesting. I think you're right, and I liked the way you put that: what's fundamental about that advance.
I think what's important to remember, too, is that that playing field, so to speak, that area of board games, is so ridiculously much simpler than any sort of knowledge or human conversation or language-based real application that we have.
I used to give these talks about this. And I'm not trying to put you in this box, you just triggered something in my mind, but everybody who first learns about how to get machine learning done in one context rolls into the conversational context.
And they have kind of the same idea, right? They're like, cool, I just need a feedback loop here. What I need is a person talking to the computer, and a good thing happens or a bad thing happens at the end. And as soon as I have that, I'm going to do the self-optimization thing, because that's how machine learning works.
And there's a reason systems work like that in the world, right? There are pieces of systems that work like that in real applications. But there is no end-to-end trained dialogue machine, in that closed-loop form, that means anything to anyone or is used for real applications.
And it fundamentally comes down to how complex the space of language and dialogues really is. Even though the space of Go from a mathematical perspective is vastly complex, right? Two to the whatever, 113 or 111, or however many points there are on the board, kind of combinations of stones and all this stuff.
Like those are huge numbers. But when you start with language, you begin with what is literally mathematically an infinite dimensional vector space.
[00:44:53] Jason Gauci: Yup. The other part of it is there's no ambiguity. I mean, you place the stone and the same thing happens every single time, and there's no room for interpretation. But for language it's the exact opposite. I mean, you couldn't build a language that didn't rely on assumptions.
[00:45:11] Joe Bradley: I mean, if you do, you kind of end up with something like a computer language, right? That's sort of where that goes in a lot of ways. And those are obviously meaningful and interesting, but, you know, nobody talks in Python for a reason.
[00:45:23] Jason Gauci: Right. Except Patrick, but nobody else.
[00:45:27] Joe Bradley: Yeah, that's actually interesting, cause it's another thing GPT is really good at. It's really good at writing computer code out of language, which I think is fascinating.
[00:45:36] Jason Gauci: Oh right. They're calling it Codex?
[00:45:40] Joe Bradley: Yeah. There's like a few different ways this has come to life.
So there's a bunch of people that have done this. You can go poke through Twitter and find all sorts of good examples. You can literally build a front end, in some cases, by telling the machine what you want the webpage to look like.
[00:45:59] Jason Gauci: I've seen those demos they're amazing.
[00:46:01] Joe Bradley: But to your other point, sorry, we kind of meandered a bit, but to your other point around uncertainty, I also really agree with that. I think it's a big problem in conversational AI, right? You have these models, especially for natural language understanding, that are built on these transformers and these embedded representations.
And they're really good in a lot of ways. They're very smart, but they can still make really dumb mistakes, right? That's not beyond them. And so most industrial-strength models, most real systems, are forced to have some combination of rules and backstops against these neural network approaches.
And I think a lot of what's missing is a system around it that has a good idea of how good this natural language understanding is likely to be at this problem. My personal opinion, and some of the research work that we're doing, is that you actually need that system to be fairly decoupled from the system whose job it is to make the prediction in the first place. I think you can't have zero coupling, and the art of it is in what ways the natural language understander is decoupled, and in what ways it's coupled, to the system itself. That's kind of the hairy edge of this problem, but I think we haven't done a great job at that yet.
I think there's a lot of research still to be done, not we personally, but we as a culture, and I think it'll be important, because I think it'll be foundational to some of the other problems that you talked about. If we want to start building a better hierarchy of understanding for some of these models, a step on the way there is to ask: well, when is it wrong?
And have a separate opinion about when these models are wrong, which can help us develop an understanding of the categories of cases where these models are wrong. So you can start to imagine an interplay, right? If I separate that system enough, it's going to begin to categorize a whole slew of cases where this thing is wrong that it doesn't know about. And then that slew of cases becomes the basis for an abstraction of, okay, what's a knowledge area where this thing is weak.
[00:48:20] Jason Gauci: Yup. Yeah, totally. I think, just as a field, handling uncertainty is very, very difficult right now with deep networks. And this is for something simple, like a supervised model, say a model that predicts cats and dogs or something. There's still not any consensus. There are effectively two or three camps. There's one camp that says, well, train 10 different models, either each on one tenth of the data, or you shuffle the data for each model, and now you'll get 10 different answers.
And so, depending on how much those answers vary, you can say how confident you are. There's another camp that is effectively doing the same thing, but within the model, so you kind of multiply the layers and now you have this distribution. And then there's a camp that says, let's put priors on everything and use Bayesian approaches.
And none of them work very well. The reason why you can't just go on pytorch.org or tensorflow.org or wherever and just get a model with uncertainty is because there's not really a method that satisfies it very well.
[00:49:40] Joe Bradley: There's a lot of reasons for that. But I think one of the most important ones broadly goes under the header of processes that generate language, and how varied they are.
Right. And this is one of the things we see in an industrial context all the time: you can have models that do great based on real-life training data, from real people who actually talked to the model, that you have now learned to correctly predict, and then something very subtle about the underlying environment can change. People can start talking about a business promotion in a different way, because now there's an advertisement that describes it in a different way, and all of a sudden some of these things really start to fall apart. This is one of the areas that, for me, illustrates why AI kind of isn't where we think it should be, or where we think it is.
And I think that, mathematically, what's going on is that the generating function for language and the infinite dimensionality of this vector space make it very hard to train models that are really rooted in the semantics as much as we'd like. Probably in the same way that GPT is existing on this membrane, really only seeing what it can see, missing these big pieces of reality, and not having some of this base symbolism to attach to, the models that are doing the work of conversational AI today can still be infected with problems like that. They can be confused by what we call lexical cues more than we'd like, right?
A lexical cue, for those who don't know, is something more on the surface of language, about the word choice and the top-level design of the words in the sentence, versus semantics, which is that deeper, agreed-upon meaning.
[00:51:27] Jason Gauci: Yeah. One really prime example of that is cats and dogs.
Right? We always hear that analogy: it's raining cats and dogs, or this person's a dog lover, this person's a cat lover. And so our mental model as a society is that cats and dogs are opposites, and all of us as children grew up watching movies where cats and dogs didn't like each other.
That's a trope, right? So in our mind, cats and dogs are really far apart. But if you were to use GPT-3, it's going to put them right next to each other. And the reason is, websites that talk about cats, a lot of them talk about dogs too. Imagine a website that sells pet food, right?
They're going to be talking about cats and dogs at the same time. So actually, anything unsupervised is going to put cats and dogs very close together, especially in the grand scheme of anything that could be talked about. But we have this cultural separation, and those kinds of things are just very hard to put into the model in any meaningful way.
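Jason's cats-and-dogs point can be illustrated with made-up embedding vectors: the shared "domestic pet" dimensions, the ones co-occurrence training actually reinforces, dominate the cosine similarity, while the cultural "opposites" axis barely registers. All vectors below are invented.

```python
import numpy as np

# Co-occurrence-trained embeddings put "cat" and "dog" close together,
# because pet-food sites mention both, even though culture treats them
# as opposites. These 4-d vectors are hand-made to show the effect.

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# First dims ~ "domestic pet context" (what the training text shares);
# the last dim is the cultural cats-vs-dogs "opposites" axis.
cat      = np.array([0.9, 0.8, 0.1,  0.3])
dog      = np.array([0.9, 0.7, 0.2, -0.3])
airplane = np.array([0.0, 0.1, 0.9,  0.0])

print(round(cosine(cat, dog), 2))       # high: shared co-occurrence wins
print(round(cosine(cat, airplane), 2))  # low: in the grand scheme, cat and dog are neighbors
```

Note the opposite signs on the last dimension barely move the similarity: one "cultural" axis out of many is easily drowned out, which is Joe's point below about hoping some dimension carries that separation.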
[00:52:29] Joe Bradley: Yeah. I mean, I think what you hope is that scenarios like that will be accommodated via the high dimensionality of the embedding space, right? You can learn about some of that on some dimensions. And high dimensionality is really hard to visualize, as I'm sure you know, and really hard to have intuition about, because the distance metrics and all this start to be really weird, but you would hope that there'd be some dimension where there's this separation.
Right? Which in our minds would correspond to a character of the animal or something like that, rather than a functional view of the system. Because we're also not surprised when we go into a pet store and see cat food next to dog food, right? That doesn't make us say, what's wrong with this world?
So we have those separate dimensions or something, and I would hope that the representations can accommodate that. But I think the challenge, to go back to the earlier point, is: what are those underlying dimensions? They're not stable, right?
Like if I slant a concept slightly differently, we might re-dimensionalize that space a little bit differently. So we sort of make them up on the fly in a lot of ways. Obviously there are some touchstones, but they're something that we create dynamically. So how do we interact with models at that level, through the training process, or even through a discursive process in dealing with them? Which is another thing I really like about GPT-3: you now sort of talk to the model by giving it examples, and can kind of train it with real language. That's also an advance that I think is very important.
And we're going to need to cope with that and figure out how we use it, because ultimately, the better these things are at creating abstractions and analyzing what they're talking about in the ways that we're discussing, the more we're going to need to converse with them to make sure we understand how their minds are working. In the same way that when you and I talk to each other, we've got to get to a baseline of: who is he?
Is he actually a dog hater? I have to kind of figure that out first to talk to you appropriately about dogs or something.
[00:54:41] Jason Gauci: Yeah. I wonder, if OpenAI is going to train a new GPT model, what if they started with books for babies, and the random network was biased in favor of baby books, and then they literally trained it on books meant for older and older people?
So it kind of followed the same path as a human in terms of what kind of material they consume. I wonder what that model would presuppose. I feel like there's something maybe really interesting there.
[00:55:16] Joe Bradley: Yeah, I agree. I think you quickly end up in the world of AGI, right? Artificial general intelligence, and I'm definitely by no means an expert there, but I think that's kind of where a lot of this stuff goes in the end. You're like, well, I have to start teaching these things more like they're a child. And again, this is why uncertainty is so important: I have to start relying on their own understanding of themselves, and their ability to express their understanding of themselves, in order to influence them in the ways that I want and have them take the right actions. So if you really break it down, I think you start thinking, wait a second, I've got to imagine this as more like a human thing that I'm teaching and less like a mathematical process that I'm training. But obviously the math doesn't go away, right?
[00:56:04] Jason Gauci: Right. So let's dive in a little bit on transformers, maybe, and then we can back out and look at the whole problem again. So if I remember correctly, and my knowledge is very limited, a transformer can take in an arbitrary number of words, and then it creates some embedded representation of that variable-length, you know, paragraph, I guess.
So that's the encoder. And then from that embedded space, you have a decoder that can emit words and also emit a transformation of this space. So imagine you have this embedding that contains all these things you want to say, and as you say those things, you move to different points of the embedding where now there's less to say, and eventually you emit some special token that says you're done.
Did I kind of get that right? Or how do transformers work?
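Jason's description of the decode loop, emit tokens until a special end token appears, can be sketched with a toy stand-in: a lookup table plays the role of the trained decoder, and all the tokens are invented. A real transformer decoder would condition each step on the encoder's embedding of the input as well as the tokens emitted so far.

```python
# Toy decode loop: keep asking the "decoder" for the next token until it
# produces the special end token. The table below stands in for a
# trained network that maps (context so far) -> next token.

END = "<end>"

def toy_decoder(state):
    """Map the tokens emitted so far to the next token (stand-in for a net)."""
    table = {
        (): "hello",
        ("hello",): "world",
        ("hello", "world"): END,
    }
    return table[tuple(state)]

def generate():
    out = []
    while True:
        token = toy_decoder(out)
        if token == END:          # the special "I'm done" token
            return out
        out.append(token)

print(generate())
```

The structure, state updated after each emission, generation terminated by a distinguished stop symbol, is the part that carries over to real encoder-decoder transformers.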
[00:57:08] Joe Bradley: Yeah. I mean, it's an area that's live, right? There's definitely a lot going on there. So I guess what I would reference for folks, one thing that I think is important, is this notion of attention, right?
So one of the hard problems in language for a long time has been: how well can a model reference across a great degree of space between concepts? And when you look at a sequence problem, like a time series sequence, one of the longstanding problems in time series analysis is, well, what if I have an effect last January and it affects this March or something?
Right. And it's a macroeconomic underlying effect. And so there's been all this work in sort of hand-curated feature building in those contexts, which are sequence models as well. So they're fundamentally, mathematically, the same type of object that you have when you analyze language.
You know, people use a lot of bespoke techniques to do that in time-series analysis, you know, all the ARIMA models and just tons and tons of work. And I think the same thing was done for many years, in a hand-curated way, when looking at language. And so I think one of the advances that's pretty important in this context is that we've begun to train these models, these transformer-based systems, in a way that they can locate those long associations.
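The hand-curated long-range features Joe mentions, "last January affects this March", might look like the sketch below: an explicit "what happened k periods ago" column built by hand. The data and the helper function are invented for illustration; a real pipeline would typically do this with a library like pandas.

```python
# Hand-curated lag feature for a monthly time series: pair each value with
# the value from `lag` periods earlier (None where no history exists yet).

def add_lag_feature(series, lag):
    """Return (value, value_from_lag_periods_ago) pairs."""
    return [(x, series[i - lag] if i >= lag else None)
            for i, x in enumerate(series)]

# 14 months of made-up sales numbers.
monthly_sales = [10, 12, 9, 11, 30, 28, 27, 26, 25, 24, 23, 22, 40, 41]

rows = add_lag_feature(monthly_sales, 12)
print(rows[12])  # -> (40, 10): this month paired with the same month last year
```

The contrast with attention is that here a human decided the 12-month association mattered and wired it in, whereas a trained transformer can learn which distant positions matter on its own.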
Right. And in a way that they can provide for us, you know, this is kind of another way to talk to the model, right? They can provide for us connections between them. You can ask, well, what other words in the sentence does this word relate to? And in an industrial context, that's super helpful as a way of tuning and debugging and, you know, trying to improve systems, right?
Those sort of co-reference patterns, and I don't mean co-reference in the literal linguistic sense, but these patterns of correlation, these patterns of mutual interest or mutual effect, become ways to find, you know, to go and kind of figure out the misunderstandings that the models are having and try to improve them.
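The attention idea Joe is referencing, including reading off "which other words does this word relate to", can be sketched as scaled dot-product attention. This is a minimal pure-Python illustration with made-up 2-d vectors, not production code; the weights matrix is exactly the per-word relatedness signal that is useful for debugging.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(queries, keys, values):
    # Scaled dot-product attention: each query scores every key, the
    # softmaxed scores weight the values, and the weights themselves tell
    # you how much each position attends to each other position.
    d = len(keys[0])
    out, all_weights = [], []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        w = softmax(scores)
        all_weights.append(w)
        out.append([sum(wi * v[j] for wi, v in zip(w, values))
                    for j in range(len(values[0]))])
    return out, all_weights

# Three "words" with invented 2-d embeddings; self-attention uses the same
# vectors as queries, keys, and values.
x = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
out, weights = attention(x, x, x)
print([round(w, 2) for w in weights[0]])  # -> [0.4, 0.2, 0.4]
```

Crucially, nothing limits how far apart two attending positions can be, which is how transformers handle the long-range associations that hand-built sequence features used to cover.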
[00:59:26] Jason Gauci: Got it. Okay, cool. That makes sense. Yeah, I definitely have to get ramped up on it. You know, my background's in reinforcement learning, and I saw a paper recently about someone using transformers to do sequential decision making.
And so transformers are kind of coming in like a battering ram into my field and so many other fields. And so I've definitely had to get ramped up on them. But yeah, I've seen the word attention a lot, and maybe we'll get you back on and we could do a whole show on transformers once we somehow get ramped up on that.
Yeah. Let's pivot to LivePerson. So tell us about LivePerson and what you all do and what kind of services or products you offer.
[01:00:15] Joe Bradley: Yeah, so, all right. LivePerson, you know, fundamentally what we're trying to do, and we have some new things we're trying to do as well in terms of relating directly to customers, but the core business at LivePerson for the last 20 years or so has been about, you know, essentially making the connection between us, the people in the world, and the brands we have to deal with, or want to deal with, a little bit better, and making our lives a little easier because of it.
And so that began back in the day with online chat, right? Which is sort of almost a dirty word now. And I think we actually would say at LivePerson that, you know, we thought it was kind of a dirty word first, right? Not that online chat doesn't have useful applications, and we still do a lot of online chat, but we don't see it as the future, you know, like, hey, I log into a website and I have a sort of non-persistent, limited connection with an agent on the other side.
And I'm chatting with them for five minutes, and then, you know, if I break the connection, it's gone and I have to start again. We don't see that as a model for a really good customer experience, or a really good brand experience for that matter. But that is nonetheless kind of how the company was built, because that model does have some advantages over just the straight phone model.
That was what was there before. But as time has gone on, right, about, I guess it's about four or five years ago now, LivePerson went early, you know, into a messaging context and what we call more asynchronous communication. Right? So now you've got brands and consumers communicating through, you know, SMS messages, through WhatsApp, through LINE in Japan, through whatever, right?
There are sort of umpteen messaging platforms out there. One of the things we do is we make sure that, you know, you can hold a great conversation with your customers on these platforms. But where that obviously starts to lead, and it's really interesting, is when you look at the conversations people want to have and what it means to shop that way, compared to what it means to shop, you know, in kind of the website context that we shop in today. It's fascinating.
Right? One of the jokes I sometimes make is that one of my interview questions at Amazon was like, all right, well, help me build a model about, you know, whether or not the shopper's about to leave the website, or help me build a model of whether or not this is a gift shopper. How would you build that?
Right. And these are things that people just tell you in a conversational context, because they're trying to get help. Right? So one of my favorites is a woman talking to a sporting goods company, and she's like, hey, I'm late gift shopping for my 12 grandkids and my six great-grandkids. Can you please help me find some gifts for them?
Right. You know, literally the kinds of things that a company like an Amazon would be, you know, sifting through web search and web activity history to try and make an inference about, are just things that people tell you, because they want to let you know that this is their problem.
They want you to give them help in solving it. You know, she was very interested in the fact that this service agent on the end of the line could, you know, next year reach out ahead of time with some proactive offers, right? Like, all this stuff that can kind of feel seedy if a company's doing it behind the scenes on a website, but feels very natural
if you're the one telling the company, this is my problem and I'd like you to help me. So there's some, like, when I kind of drink the Kool-Aid on this problem, and sometimes I drink the Kool-Aid, I feel like what we're building is a way to do a more open and transparent and a little bit better-informed conversational experience for shopping, which takes you back to the world that we all used to live in before the internet, where most of shopping was like that.
[01:04:03] Jason Gauci: Yeah. I think, to your point, I mean, I've recently been buying all my clothes online since COVID. You know, once you've narrowed down the size, and almost everything comes in almost every size, so narrowing down the size doesn't really do anything. You know, it's almost like, how do you sort of search for your aesthetic?
It's really difficult. Yeah. But I think that if it was sequential, like maybe I would search for blue shirt, and I would look at what came back, and I would realize, okay, really what I need is a blue striped shirt. And somehow knowing that this person pivoted to blue striped shirt, now it's like, okay, you have these two things, and maybe you would show a striped shirt
that's navy. 'Cause, you know, that stripe part is really important. And so just handling that modality, I think, is a conversation. And whether you're doing it through a search engine or you're doing it with text generation, I think that's something that has to be addressed.
[01:05:00] Joe Bradley: And if you think about it, what you really want to understand, if you're in that conversation with someone and trying to help them buy a shirt, is something a little more fundamental about what they're doing, right?
Like, are we in a situation here, and I don't know how you shop, where I've already got three copies of this basic shirt and I'm trying to find it in a different color? And I know it fits well, and I kind of just want to get moving and go get myself a fourth one, because I love these shirts.
It's a pretty uncomplicated mission. And what I want in that situation is someone to show me, you know, all the different copies of that particular item. And then maybe, you know, I want a little back and forth on the style I'm looking for, and I'm done. Or I may have been, like, poking around again.
And I'm like, oh, you know, I need a little bit of a fashion change. Like, I feel like I'm kind of boring right now, and I'm looking for new ideas, and I'm looking for a shirt that, you know, maybe I wouldn't have worn before. And I might need a really different experience.
[01:05:53] Jason Gauci: Maybe I'm teaching seventh grade and I need a periodic table that's somehow also, like, WWF.
[01:05:59] Joe Bradley: Right, right. I need the SmackDown. Like, I need to turn it over, and then, you know, Macho Man Randy Savage can jump out of it or something.
[01:06:06] Jason Gauci: Like a hydrogen atom and just crush it. Yeah,
[01:06:09] Joe Bradley: Exactly. So, like, I think you want those missions understood. This is a lot of where the through-line for at least my career, sort of across Amazon and Nike and LivePerson, has been: what are those missions, and how do you define them? And I think the language context is obviously the best context to define them in. A lot of what we build is about allowing brands to sense that, and to identify those missions from the language that their customers give them, so they can be more helpful.
And so when we talk about conversational AI, or at least the tools of conversational AI and how we use them at LivePerson, you know, there are at least kind of two big categories, right? One category is around, of course, helping brands build these systems so that they can have automated ways for customers to solve their problems, so that it's easier and faster and better for everyone.
And then the other side is, okay, well, are you set up to really listen to your customers? You know, do you know, for instance, that when your customer complains about having tried to call you, having tried to reach you on your website, and having tried to, you know, text message with you, by the time they're at that point, they've got like an 80% chance that they're going to leave you? Because this canonical problem, like, just frankly makes everybody mad.
And so being able to listen to stuff like that at scale, right, and understand it at scale, so that you can provide a better customer experience, that's another layer of the conversational AI offering that we think is really important for us and really important for brands. Because in the end, you know, consumers in America and other places now are pretty fickle beasts, right?
Like, we've been kind of trained by the Amazons of the world and other companies that have pioneered in customer service, you know, to expect these really positive service experiences that are really different than 30 years ago. Right? But now that's the norm. And if you want to keep building your customer base, if you don't have a captive audience, if you're not like a cable company or something where nobody can leave you, then you have to be great at this now. And you have to kind of blur that line between, you know, how are you solving their problems, and then how does that translate into future growth between you and them and building that strong relationship?
So we kind of build products on both sides of that, right? We build natural language understanding with both of those use cases in mind, for instance.
[01:08:31] Jason Gauci: It sounds like another part on the product side that would be really tricky would be knowing when to hand off to a real person. That seems like something that a lot of companies, you know, wouldn't know how to do.
They would really rely on you.
[01:08:45] Joe Bradley: Yeah, that's right. And I think that's actually like a nook in a much bigger problem, which is, how do I understand how well the system is doing with my customers? How do I understand the quality of these conversations between the computer and the person? And there is not good tooling for that.
This is one of these foundational areas where I think, you know, you just don't find anything good in the industry right now, because it's one of these annoyingly hard problems, right? And it's kind of related to all the abstract stuff we've been talking about. Like, when does a model know it's doing well?
When does it not? And how deep an understanding, and how introspective, is that model in the first place? And the answer is typically not very. So you can't rely on the model to do it. You need to kind of build some of these separate, external systems to be able to see this. And some of that requires really rolling up your sleeves and doing some dirty work and asking, what is the right way to describe the top 10 kinds of problems that people face when they're trying to talk to automated systems, and how they break down, and how that customer experience goes wrong?
So we spent a lot of time on that. We have products, you know, that we're bringing to market right now, in fact, on measurement. And get this, the product has this pithy name of MACS, right, the Meaningful Automated Connection Score, which is really about, hey, how good or bad was your CX in these automated conversations?
And where should you improve it? And it's important for two reasons, right? You need to know how well you're doing in order to have any sensible business strategy, and tuning and optimizing these AI systems is really hard. It's a lot more work, and ongoing work, than people at first often conceptualize.
Right. You've got to think of it like a website: you don't build it on Thursday, send it live, and then not touch it for a year. Right? You would never do that. But a lot of people think about conversational AI as, I'll build the bot, I won't touch it, it's done. But it's really an ongoing process of iteration, and we need better tools. In the same way we do website A/B testing, and in the same way that we have a bunch of tagging and, you know, kind of infrastructure and software built up around learning what's working about your web presence and improving it, you need the same things if you're going to learn how to improve your conversational AI. And that's part of what this metric and the system is about: helping you quickly locate places where the conversations went wrong.
Where did the bot get stuck in a circle, which drives us all crazy? Where did the NLU just, like, totally barf and lose it, and just completely break the customer trust? Right? So our work in general has really been about quality. And obviously one of the most critical pieces of quality is, do you really have good measurement? Do you have good understanding about what quality is? And do you have good pointers as to where quality went wrong?
[01:11:37] Jason Gauci: Yeah, I mean, this is a huge problem. I think the people who if someone can even crack this in a generic way, it would be, it would move the entire industry a mile forward because I, I think you know, we have all the same issues where it's just so hard to triage.
One of these situations, what we end up doing, you're probably doing something similar is just coming up with so many metrics on, you know, for example, how many people are clicking on this? Like, what's the click rate? What's, let's use the Amazon example. What's the click rate. What's the conversion rate.
How many people abandoned the whole site? Now many people bounce, all of that stuff. And then you have this other, you have this downstream problem where as soon as you define a metric. You create an incentive. And so people you know, machine learning engineers at your company are going to try to drive that metric up.
And so typically what happens is eventually something that's not being recorded will start to suffer. Right. And so now you have to make a metric to track that. And so you ended up like just adding more and more balls in the air and it's because you're in this arms race with yourself. Right. And so, yeah, the it's just so, so difficult to, to, to go through that process a
[01:12:57] Joe Bradley: It's a little bit of a metrics alphabet soup, in some ways. One of the things we did to deal with that, the first thing we did before we started working on this MACS project, was we sat down and said, well, we need to organize the metrics somehow, because people are doing exactly what you're saying. Like, it's a conversation I've probably had five times now. Before the pandemic I would go physically to brands all the time, and now I talk to them on the phone. But, you know, I sit down and ask, okay, well, how's it going?
And they're like, it's great, we built the bot on, you know, whatever platform they built it on, and it's got an 80% containment rate, it's awesome. I'm like, that's great. What are you doing with the 80% extra capacity you now have in the human contact center? Like, what did you choose to do with all those people? You must have people at their desks with nothing to do, right? And the answer is never, like, oh yeah, here's what we did with them. The answer is always, well, no, actually, volumes on the human side have actually gone up, or they've basically stayed the same, but the bot's doing great. Right.
And it's fundamentally because this concept of containment is just a very broken way to think about whether or not the bot served the person in the way that it needed to, you know, and solved the problem or not. So, like I said, one of the things we did was we sat down and we said, well, let's organize the metrics into, you know, some conceptual framework that people can use so that they can see. 'Cause we're not going to get out of this; there's not going to be one magic number that does everything. This is actually a highly multivariate problem. There's lots of different ways to think about what good means.
[01:14:24] Jason Gauci: Adapting. As soon as you make a metric, someone will try to game it and then they'll cause another problem.
[01:14:31] Joe Bradley: Right. Right. So you can't kind of live on that one; it's just too unstable. Yep. So we built this framework, you know, we gave it the pithy name of the four E's, right? And there's an efficiency vector of metrics. There's an effectiveness-in-solving-the-customer-problem vector of metrics. There's emotion, right? What's the customer's emotional response? And now I'm going to totally blank on the fourth E, which is going to make me super embarrassed. It'll come to me. So we split into these categories, and under them we put both the sort of existing contact center metrics that people are used to, like repeat contact rate within an hour, within three hours, within a day.
And then some of these newer metrics, like, hey, what's the MACS score on this? Or, you know, what's the sentiment of the customer, as measured by these machine learning models? Now you can begin to give brands a sort of reasonable target to optimize for that makes conceptual sense to them.
The fourth one, by the way, was effort, right? How much work did it take to solve this problem? And these are much more connected to things that we care about as customers. Like, when I tried to contact you, did it work or not? That's effectiveness. How much of a pain and how frustrating was it? That's emotion. How much work did I actually have to go through to do it? That's my effort. Right? So these concepts, I think, correlate a lot better to what a great CX is. So we spent a lot of time on that, and then that set up some of the MACS work and some of the analytics work that we've done subsequently.
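The four E's grouping Joe outlines (efficiency, effectiveness, emotion, effort) could be organized along the lines of the sketch below. This is purely illustrative: every metric name and value here is invented, not LivePerson's actual metric set.

```python
# Illustrative "four E's" framework: group raw contact-center metrics under
# conceptual categories instead of chasing one gameable number.

FOUR_ES = {
    "efficiency":    ["avg_handle_time_sec", "containment_rate"],
    "effectiveness": ["repeat_contact_rate_24h", "problem_solved_rate"],
    "emotion":       ["customer_sentiment", "macs_score"],
    "effort":        ["turns_to_resolution", "channel_switches"],
}

# Made-up observed values for one reporting period.
observed = {
    "avg_handle_time_sec": 210, "containment_rate": 0.8,
    "repeat_contact_rate_24h": 0.15, "problem_solved_rate": 0.9,
    "customer_sentiment": 0.6, "macs_score": 0.72,
    "turns_to_resolution": 7, "channel_switches": 1,
}

def report(framework, values):
    # Present each "E" with its own vector of metrics, so optimizing one
    # category can't silently hide damage to another.
    return {e: {m: values[m] for m in metrics}
            for e, metrics in framework.items()}

for e, metrics in report(FOUR_ES, observed).items():
    print(e, metrics)
```

The design point is the one from the conversation above: since any single metric invites gaming, a brand optimizes against the whole multivariate report rather than one number like containment rate.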
[01:15:57] Jason Gauci: Very cool. Very cool. So what is something that makes... and I know with COVID we're all working from home, so everyone's job looks pretty much the same: there are kids running around, and there's a dog, and there's, you know... but maybe before COVID, or maybe projecting into the future, what about LivePerson is really unique, where you kind of walk into the office, or maybe you're talking to the folks, and you feel like, this is something I haven't really seen anywhere else?
[01:16:26] Joe Bradley: Yeah. You mean like in terms of being inside the company or in terms of what the product offers?
[01:16:30] Jason Gauci: It could be either. If it's inside the company, it could be, you know, everyone plays ping pong on Thursdays, or it could be something about the kind of ethos that you've built, something like that.
[01:16:43] Joe Bradley: Yeah. Yeah. So for me, I mean, I don't know, you'd probably have to bring people on, you know, from the team to make sure I'm not lying to you, but I can tell you about some of the things we strive to do, and some of the things that I see us as successful at. I don't think it's easy. I guess, let me talk about the science domain.
Like, I don't think it's easy to combine two critical concepts that are important for real success in scientific research and product development. And those two concepts are, one, truth seeking, you know, are you really satisfied with what you've done, or could it be better, or what's really wrong here, those kinds of questions.
And then, two, a spirit of real collaboration and support. All right. So often, like when I was a physicist, right, the way that physics solves this problem is by not solving it; they just go after option one. And, you know, it's not at all uncommon for the discourse between two physicists to begin with somebody starting out with: oh, that's great, what you said, but let me tell you all the ways in which you are an idiot. Right? Like, literally.
[01:17:48] Jason Gauci: We have a term for that. We call that the intelligent jerk. Yeah. It's an archetype, an anti-hero of the tech world.
[01:17:57] Joe Bradley: Yeah. Yeah. And I mean, it's very common, and there's a sense to it logically. Like, if you don't have some of that disagreeableness, and I don't mean that literally in the psychological sense, you know, you run into too much conformity and you don't do great work.
Right? Like, you can't actually do well without some degree of that. But to find a group of people that can marry that, that have that same impulse and understand how to express it in a way that is constructive rather than destructive, that builds partnership rather than tearing it down? Those are special people.
And we work very hard to really staff ourselves with that combination of skills, up and down the science org, and I'm pretty pleased with how we've done. It's without a doubt my favorite place I've ever had to go to work and do science work and learn about, you know, research.
And I had a good situation at Amazon, a good situation at Nike; I'm not saying they were bad, they were great. But here I think we've done something special by prioritizing that combination. We have not hired great people that we felt just weren't going to hit dimension two very well.
[01:19:05] Jason Gauci: Yep. Yeah.
It's a really good point, because the other thing, too, is people who are really good, you know, and have the right emotional quotient, and have that ethic, and have all those pieces, they're going to grow really quickly. And people who grow quickly, you know, I don't know what data there is on this, but people who grow quickly will maybe want to change teams, because there's an opportunity somewhere else, or change companies, or something like that.
But people who are in over their head, or who don't have the right character, those tend to be the people who stay forever. And so it's so difficult... I mean, so much time needs to be spent to get that right. Because yeah, otherwise, you know, there could be whole years where there are problems that take a very long time to fix.
[01:19:59] Joe Bradley: Yeah, I think there's some dynamics like you're talking about. They're very real, you know, I felt like, I think one, you know, one personal antidote to at least the first part. I don't have a good antidote to the second one, but I think one thing we're also really fortunate about at LivePerson is we have pretty amazing data and capabilities to work with.
Right? There are hundreds of millions of goal-oriented dialogues coming through the platform every day. Obviously we use those thoughtfully and carefully, respecting all the contractual arrangements we make. We don't use all those conversations the same way, and different brands have different expectations about how we use that data.
But nonetheless, there are lots and lots of amazing conversations to work with. And there are also tens of thousands of people sitting on the platform who can inform, you know, conversational AI with their opinions, their expert opinions. Right? All these agents sitting there, able to give feedback to the systems and tell them when they're working well and when they're not.
And so some of those tools, I think, fuel, much more so than anything that I or the other managers on the team do every day, people coming in and really kind of hunkering down with LivePerson and doing good work for a long time. So it's been nice to experience that. But yeah, in general, I agree with your assessment of the dynamics.
Like, you get somebody in, you know, who's struggling on one of these two dimensions of work, and it's hard for them to leave, because there's a lot more risk.
[01:21:18] Jason Gauci: Yup. Yup. So, for folks who are listening to this and completely enamored, they want to do conversational AI, they want to get into this field: does LivePerson have any openings? And can you kind of break it down? You know, does LivePerson do internships? Do you have full-time positions? And if there are, you know, post-COVID, geographic locations you want to focus on, what are they?
[01:21:46] Joe Bradley: Sure. So the first thing to say is, we're hiring a lot. The company has grown; you can go read our public reports, we are a public company, so you can see how we're doing, and we're growing rapidly. So we have a lot to do. You know, this year I'm really trying to focus on building an exceptional scientific research team that is farther afield in time, right?
So we've typically had at LivePerson a strategy of, well, we want to work on research, we want to work on science, but we want it to be in the context of being productizable in the near to medium term, so really not much more than six months out. We want to see this come to life in a way that impacts the product.
And I think that was right for where we were and all the stuff we needed to build; there was a lot of obvious stuff to do. But now we're in a different place, and I think we want to be looking at one-, two-, three-, four-year timeframes for how some of this stuff turns into technology that we can use. And we want to go a little deeper with academic partnerships as well.
So, you know, I'm staffing a full team on research with kind of that mandate, and I'm looking for a lead for that, someone who wants to come in and say, okay, I see this data, I understand you guys have a basic dialogue problem you're trying to solve, and I want to go push science research in this direction, in partnership with this, you know, academic institution, over the next three, four years.
And here's the agenda that's going to help drive that. And I mean, I have some ideas about where that should go, but I'm really looking for a very senior lead to come in and, you know, take the reins on that. So we're actively recruiting there, and they of course will actively recruit for the team, you know, that works with them.
But we have other jobs and other opportunities as well. For instance, we're an engineering company, you know, and on my team I hire analysts as well; I run a lot of the analytics products for LivePerson. And we hire, of course, machine learning engineers. We hire people to build backend systems that support all the model building and the model training and the model management.
And, you know, we're doing a bunch of work migrating pieces of the platform into the cloud, so there's a lot of real engineering work. So one vector that often works well is if someone has a strong engineering background and they're interested in this space, right? There is, you know, definitely work to do to become an expert in the science and the field.
But there are ways in. I think there's a lot of engineering work that provides, like, a boots-on-the-ground kind of introduction to the technology stack, and then you kind of get into the flow, where now, okay, you're coming to our Friday brown bags, you're working closely with scientists because you're building things that they're using.
And so you're in conversations with them in a different way. It's a really live place to learn and grow. So, I mean, if I were someone with a strong engineering background who was looking to get into this field, I'd be looking for a way to do it with a role like that, rather than trying to say, well, okay, I'm going to go to bootcamp for six weeks,
I'm going to read a bunch of papers, I'm going to do some, you know, prototype chatbots, and then I'm going to be ready to do a research role tomorrow. I think, you know, maybe there are some people out there that are just astoundingly fast learners, and that will work. But I think for most of us regular humans, you know, you want to find a way, like, I'm going to build this into my life for a couple of years.
[01:25:01] Jason Gauci: Yeah. And there's so much tribal knowledge, so much tribal knowledge that, I mean, that bootcamp doesn't exist where you're going to learn the tribal knowledge to be able to ace a senior research scientist interview. So, yeah, I think your advice is spot on.
[01:25:16] Joe Bradley: Yeah. And I mean, in addition to acing the interview, when you show up on day one, are you really ready to do the job? Are you setting yourself up to succeed, right?
[01:25:26] Jason Gauci: Yup. Yup. Do you have internships? There are a lot of folks who are maybe in the middle of college and are looking for something over the summer, but then they're going to go back to school afterwards. Does that exist?
[01:25:35] Joe Bradley: Yeah. Yeah. In fact, we just set up a meeting, I think for next week, to sit down and talk about how we're approaching next year's crop of internships and what we do there.
So yeah, we will be doing internships, and we do them all year. We have interns right now, and, you know, for different disciplines different times make sense, right? But there are a lot in the summer. And we've been focusing a little bit on graduate students, you know, the last year or two.
So it's been mostly PhD or master's degree students coming in and doing internships, but I think we're going to change that this year and have, you know, a little bit more undergrad work to be done. The team's gotten a lot bigger, and one thing that means is that there's a wider variety of types of work that needs doing.
And I think we have some spots for a really good undergrad research project or two that look pretty interesting. Like I said, we're still in the middle of figuring it out, but that's basically where we're headed. And for me, I like to make sure that, hey, if we're going to put an internship together, we want it to be something that's a little risky, right?
It's a great time to take a risk, but also something where we see a legitimate chance for success, where we're going to learn something either way and everyone's going to benefit from it. So I think we'll be in a position to do that with undergrads next year. We're definitely continuously in a position to do that with grad students, you know, that's not going to change.
So we'd love to hear from people who are interested in possibly joining us in that way.
[01:27:04] Jason Gauci: Cool. Excellent. And so if folks want to check out LivePerson, you go to liveperson.com and you can, you know, see what they're all about. And if you want to reach Joe, you can reach him at jbradley@liveperson.com.
I'm sure you can shoot him a resume and he'll forward it to the right folks. Or there's probably a place on LivePerson, like a careers page or something like that, where you
[01:27:34] Joe Bradley: can definitely apply online. You can definitely reach out to me too, if you want to say hey. And, you know, we'd love to hear from the audience here.
As I mentioned before we got started, I think you guys do a pretty cool podcast here. I like how application-focused it is, and it feels like it has a lot of weight to it. So I'm sure your listenership are all pretty cool people.
[01:27:58] Jason Gauci: Okay. Yeah, definitely, they're super motivated.
We've actually connected a lot of listeners to careers, or at least to internships. The show's been out for a while, so we're constantly getting emails from folks who have landed full-time jobs, or who have been in them for several years now, and that's really special.
So I feel good about that. I think there are going to be people out there who are a great match for LivePerson. And if you're listening to this and you're one of those matches, you know, check out the show notes, check out the website, and get connected. Oh, actually, we didn't cover location. So where is LivePerson?
[01:28:39] Joe Bradley: Yeah, we're a distributed company now. I mean, it's evolving, like I think a lot of companies are, but when the pandemic started, we wanted to give certainty to everyone early on. And so, you know, we said, hey, if you want to move where you want to move, go move.
Right? As long as you can do your job, as long as it doesn't impact your work, you know, we're pretty open, and we're not demanding people be ready for a mass return or anything like that, because we want people to live their lives. You know, now we're coming out of the pandemic, I think, hopefully coming out, I guess, or it's resurging a little bit.
But as we sort of take steps out, we are beginning to open physical offices again. We have an office open in Seattle here that I've been to. You know, we do COVID testing either the morning before you come in or at the office before you come in. We're obviously following all the policies and protocols that are local to the areas, and we have our own sort of standards on top of that.
So I think we will be coming back more and more physically, but we're probably going to maintain a high degree of flexibility about where people work and how that, you know, comes to life. We have some policies and standards around, you know, how to manage the pay scales and the things that you sort of have to do from a corporate sense for the economics to make sense, but I think we've done that in a fair way. And I think we're open-minded, particularly when it comes to great science professionals and researchers, you know, folks with a lot of machine learning engineering expertise, people who are really doing great in these very hot fields.
These are the kinds of people we want to be really flexible with. Like, the most important thing is for you to come and work with us and for us to get some good work done together, and the exact details about the location and all that are pretty secondary.
[01:30:22] Jason Gauci: Yup. Yup. Totally makes sense. Yeah, I think that's consistent with where everyone's at, and I'm hoping they come up with some nice whiteboarding tool. I mean, that is really what we need as scientists: some way to whiteboard.
[01:30:37] Joe Bradley: Yeah, we're looking at that right now. We've got a couple of options that I want to get put in the office and try out to see how they work. I don't know if they're any good yet or not, and we haven't really tried out the tech, but I kind of agree. I'd love to be sitting here with a whiteboard that I can write on, where the writing of my partner on the other end of the communication appears almost as if they were here. If somebody solves that, they'll be billionaires overnight.
[01:31:02] Jason Gauci: Yeah, I wonder. You know, in general, the VR headset is never going to be so convenient that I would wear it all day. But I wonder if maybe that's it. Everyone keeps talking about the metaverse. I have no idea what the metaverse is. I'm assuming it's connected to VR, but that might be something where we could go into VR for an hour, and then there could be some holodeck-like thing, right? And then somehow at the end it would accumulate a whole bunch.
[01:31:26] Joe Bradley: Yeah. Yeah. Yeah. I think it will be interesting to see how we solve these problems. They're much bigger problems now, or more important problems, than they were a couple of years ago. Yeah, I wonder about VR too. I guess the metaverse is the old Neal Stephenson reference from Snow Crash or something, which is,
[01:31:41] Jason Gauci: Oh, that's right. Oh, it's been so long. I think I had to read that in high school. It's been a long time; I should give it a reread.
[01:31:48] Joe Bradley: It's one of the only sci-fi books I know of where the three-ring binder appears in a starring role, at least for the first part of it.
[01:31:57] Jason Gauci: That's right.
[01:31:58] Joe Bradley: It's been awhile though.
[01:31:59] Jason Gauci: Yeah. That's where I think they were able to, someone, I think it's like a murder mystery or something. At some point I think somebody dies, somehow killed in VR. I mean, I have very, very not-lucid memories of that book.
[01:32:14] Joe Bradley: Language is a virus, right? It's this whole concept he's exploring.
And I think there's this capability where you can show people information and it infects them and gets them sick, if you show it to them in the metaverse or whatever. It's been a while for me too, but I think it's something like that. Yeah.
[01:32:29] Jason Gauci: Yeah. We should make that book of the show next time, which means we need to read it again.
[01:32:33] Joe Bradley: Yeah. That'd be fun. If you guys ever want to do a StephensonFest or something, I'll be happy to come back.
[01:32:40] Jason Gauci: Oh, cool. All right. We might take you up on that. Cool, Joe, thank you so much. I know we're over time by a lot, but I really appreciate you spending the extra time and chatting with us. It's been absolutely amazing, and it's been a real pleasure. Thank you so much.
[01:32:55] Joe Bradley: Yeah, the feeling's mutual. It was a super fun conversation. Thank you so much for giving me the opportunity to come on and chat with you guys. Very much appreciate it. And I love the depth and the quality of the discussion, so thank you so much for facilitating that.
[01:33:09] Jason Gauci: Cool. And thanks to everyone out there. Thank you for continuing to support us on Patreon and on Audible; we really appreciate that. I've been trying to post a bit more on Twitter. In the past I've mainly used Twitter to send show notes, but I'm trying to put some more content on there. If I see something that I think is pretty cool, you know, related to coding and tech, I've been trying to share it there.
So follow us on Twitter, and follow us on all the other ones as well, you know, LinkedIn and Facebook. On any of those we'll definitely be posting the show notes every time as they come out. And subscribe. If we're not on a streaming platform at this point, let us know. I think enough people have let us know about enough platforms that we're on all of them, but there are new ones coming out all the time.
So definitely keep us up to date, keep us honest if there's something we're not on, and we'll catch everyone in two weeks. Thank you so much.
[01:34:23] Patrick Wheeler: Programming Throwdown is distributed under a Creative Commons Attribution-ShareAlike 2.0 license. You're free to share, copy, distribute, and transmit the work, and to remix and adapt the work, but you must provide attribution to Patrick and ShareAlike in kind.