Screaming in the Cloud

In this episode, Corey Quinn is joined by Paul Harrison, Senior Security Engineering Lead at Mattermost, for a discussion of the often-overlooked ethical implications of artificial intelligence in technology. They discuss how the rapid adoption of AI technologies can compromise user privacy and consent, reflecting on instances where companies prioritize innovation at the expense of these core values. Their conversation highlights Mattermost's dedication to data privacy and user control, positioning the company as a privacy-centric alternative in the tech landscape.

Show Highlights: 

(00:00) Introduction to the episode 

(01:50) How companies compromise privacy in the rush to adopt AI

(04:10) What is Mattermost? Paul explains the self-hostable, privacy-focused communication platform

(06:00) The evolution of chat platforms and Mattermost's unique position compared to Slack

(10:01) Paul elaborates on how Mattermost enables user control over data and customization

(14:23) The implications of integrating AI into everyday applications, and its challenges

(20:35) AI’s potential risks and unintended consequences, particularly in data management and security

(25:14) Paul and Corey critique tech companies’ approach to AI and data privacy

(28:59) Closing remarks and where to find more information about Paul Harrison and Mattermost


About Paul:

Paul Harrison is a Senior Security Engineering Lead at Mattermost, responsible for their Security Operations team. Prior to this, he led Security Operations at GitLab and several other emerging tech companies. Paul has specialized in building security operations and infrastructure security programs, enabling companies to have a secure footing as they grow.

Links Referenced:

Mattermost Community: https://community.mattermost.com/landing#/

*Sponsor

Panoptica: https://www.panoptica.app/

What is Screaming in the Cloud?

Screaming in the Cloud with Corey Quinn features conversations with domain experts in the world of Cloud Computing. Topics discussed include AWS, GCP, Azure, Oracle Cloud, and the "why" behind how businesses are coming to think about the Cloud.

Paul: The thing that we really pride ourselves on at Mattermost is the fact that we don't want your data. We want you to have a good, consistent, reliable, stable platform that you can build a business around and that you can change as you will.

Corey: Welcome to Screaming in the Cloud. I'm Corey Quinn. My guest today is going to talk about AI in less breathlessly enthusiastic terms, I suspect, than many of us have, because Paul Harrison is the Senior Security Engineering Lead at Mattermost, a company that historically has strong open source roots and was founded with privacy as a core approach.

Corey: So I'm going to go out on a limb and assume here that you have opinions on this, Paul. First, thank you for joining me.

Paul: Thank you very much. And I do have a couple of opinions on it.

Corey: This episode has been sponsored by our friends at Panoptica, part of Cisco. This is one of those real rarities where it's a security product that you can get started with for free, but also scale to enterprise grade. Take a look. In fact, if you sign up for an enterprise account, they'll even throw you one of the limited, heavily discounted AWS Skill Builder licenses they got, because, believe it or not, unlike so many companies out there, they do understand AWS. To learn more, please visit panoptica.app slash last week in AWS. That's panoptica.app slash last week in AWS.

Corey: From my perspective, having played in the open source space for a while and giving a rat's ass about security, I've been concerned by the headlong rush into AI, where effectively user privacy and consent seem to have been turned on their heads as companies desperately try to participate in what looks like a digital land grab.

Corey: And I feel like security, privacy, user consent, all those things are being thrown by the wayside. Please tell me I'm imagining things because I think that's the happier outcome for all of us.

Paul: No, I think you're completely on point with that. What we've been running into, and what you hear across the industry in the security space, where people are largely screaming into the cloud, is just how prevalent it's become.

Paul: And everybody is trying to stick the AI thing into whatever it is, which means there are new privacy policies, there are new terms of service, there are new things we need to worry about when it comes to the privacy of the tooling, of the stuff you're putting into it, and how our engineers or anyone around the organization is using it. And the thing we have to worry about is just where that ends up.

Corey: It also feels like there are bait and switches being done. For example, Reddit recently signed a deal with Google. I used to be very active on Reddit 15 years ago. It didn't occur to me at the time that I was effectively creating content for a company to monetize in any way other than displaying ads next to it, which, okay, fine, I can live with that approach. But no, now we're going to distill it down and spit it out as reformulated versions of you, or, you know, give people terrible advice on the internet as a default search response on Google.

Corey: That seems to have moved the goalposts somewhat significantly, because even though you had my consent, by deemed acceptance and opt-ins through labyrinthine terms of service and policies, back then we had no conception of where this would go and what we were fundamentally opting in to. I would argue it was impossible to give informed consent, because back then this hadn't been dreamed up yet.

Paul: Well, and so much of it is even the idea that anything that goes on the internet will stay on the internet and will be there forever. Thinking about stuff that we were doing 20 years ago, and just how much you've contributed to the various terrible places around the internet, is now just going to be brought into one massive sum of data.

Paul: To produce a never-forgetful version of you, and whatever that might turn into, is something we may need to be concerned about when it comes to privacy and people getting an opportunity to change their lives or do things for the better. If a profile of you can quickly come together like that, it's... an interesting problem.

Corey: Let's take a half step back here. What exactly is Mattermost? Because I have ideas, but I suspect you have better answers than I do.

Paul: Sure. Mattermost, for lack of a better way to describe it, is a self-hostable, air-gappable chat platform. You can call it, uh, Slack or Teams that you completely control.

Paul: You can put it wherever you want. You don't need access to the internet, and it means that you have full creative control over it. The vast majority of the code is also open source, so you can go and play with it as you will. And so what we enable people to do is just that. Our customer base finds incredibly diverse places to put it.

Paul: And use it for just about everything. We want people to have full control of their data, how they use it, where they want to put it, and not be dependent on an internet connection or a remote organization, so they can do whatever they want.

Corey: This feels like a product that would have taken the world by storm in an era before Slack, because back then you had HipChat, you had ejabberd servers, you had people running their own IRC servers inside of companies.

Corey: But what Slack really brought to the forefront was the idea of persistence between different devices and different points in time. Like historically with IRC, if you weren't connected at the time, except through a bouncer or something, you did not ever see what was being discussed, and that's kind of a problem in the workforce.

Corey: And also, you could give it to someone who worked in marketing or accounting, who is not themselves an engineer, and just give them the link, and suddenly they would show up in the community chat. And I love the idea; it's almost like a multiplayer notepad from my point of view. But there is no self-hosted option for Slack.

Corey: And increasingly, there's a bit of a viral network effect. I would love to get off of it myself, but we talk to most of our clients via Slack connections to other folks. So it's not a matter of getting us off so much as getting a whole bunch of very large companies off of it. And that feels like an impossible task.

Paul: Absolutely. Well, I think, like, when you go back to when Slack really started kicking off, this was in a bizarre state where mobile was just picking up. People were so tired of email being the default thing, or like Skype and Amazon Messenger and all the various things, that they wanted to glom towards one central thing that would keep a persistence.

Paul: And that persistence would work across mobile, desktop, and wherever else you were going, because people were becoming just increasingly accessible at that point. And Slack was really in the right place at the right time, I feel, to pull that off. And then we saw a lot of the trickle where people became just accustomed to it.

Paul: A lot of competition happened between Teams and, well, frankly, us, for that matter. And no one really considered, even going back to the AI topic, just the amount of data that was going in there, the level of persistence, just what kind of stuff was going to be sitting there, how it was being run, and how dependent you became upon it.

Corey: That's part of the problem: it starts growing in these weird ways. I have to say that post-acquisition by Salesforce, Slack has gone from one of those companies and services I really liked to something I almost dread using. They periodically will just push out complete user interface changes, they'll redo how workflows go constantly, and then they'll want you to put your core business logic into Slack apps in different ways.

Corey: You've got to be insane. Because the whole thing that I want when I'm building a workflow for a business unit is to put the time in once, and then it exists, and then I don't have to touch it again, much less on someone else's schedule because they're trying to deprecate a thing and shove out new interfaces and new ways of working.

Corey: Uh, most recently, I kicked a bit of a beehive when I stumbled across their policy saying that, oh, we will start training LLMs on global data, and we'll make sure data doesn't go between things, but it's going to improve the experience for everybody. And if you want to opt out, email this address from the specific address registered to Slack as the owner of the team, with this specific subject line, with this specific phrasing. It's like, this should be a checkbox, or even a Slack workflow.

Corey: Having to jump through these particular hoops means no one's going to actually do it. So don't tell me that you're the future of work when this is how you start shoving people into it. And the counterargument has been, like, well, if it was opt-in, who in the world would sign up for it?

Corey: Which is kind of the point, because everyone talks about how valuable this data is. Even if you have no problem with a company, why would you give it to them for free? You're not out here doing volunteer work for other multi-billion-dollar companies most of the time, so what's going on?

Paul: Well, absolutely. And with the idea of opt-in versus opt-out for any type of tooling, even self-hosted pieces, there's always been the conflict of, you know, if you have this as an opt-in, the vast majority of admins are never going to go and flip that bit to start sending you stats or data about whatever you're hosting.

Paul: And then if you apply opt-out, you're going to get some level of repercussion. Obviously, the beehive you kicked clearly represented that. And the reality is that they want this data, and they want to use it, because this is how they will be able to profit from it.

Paul: That leads to the question of whether they accepted the risk, understanding that the opt-out was going to come with these repercussions, recognizing that Corey is going to go and kick the beehive, and still, that may have been something they considered as part of the risk analysis of the process.

Paul: The thing that we really pride ourselves on at Mattermost is the fact that we don't want your data. We want you to have a good, consistent, reliable, stable platform that you can build a business around and that you can change as you will. And that means that when we give you a new version, your stuff is going to work.

Paul: And you can build this stuff without worrying that we're just going to go drop a random new UI on you because we decided that accessing multiple workspaces is something you didn't actually want to do in the first place. We are working in the AI space as well. But the difference is that we're building this so that you can choose your own adventure.

Paul: You can go and point it at whatever language model you want, whether you're going to self-host it, whether you want to make it public. Do whatever you want. We're building that interface so that you can go interact with it however you want.

Corey: Which is a powerful and useful thing.

Paul: It is. And that's the thing that I feel most comfortable in as I'm working through this process and contributing back to, you know, the community.

Paul: The point of enabling an open source program like this is that you can play with it however you want, and enable companies to not have to worry about us trying to slurp up that data to go build some other new language model for whatever bizarre purposes. Because I don't want your data. I don't want to see it.

Corey: Few things are better for your career and your company than achieving more expertise in the cloud. Security improves, compensation goes up, employee retention skyrockets. Panoptica, a cloud security platform from Cisco, has created an academy of free courses just for you. Head on over to academy.panoptica.app to get started.

Corey: There's also this idea that we've had that seems to be eroding in a bunch of different places. When I grew up, it was always assumed that, okay, you know when you're being recorded, because there's a light on or something and you're in a studio type of thing. Then we all started carrying massive recording equipment with us, and high-definition cameras on the supercomputers in our pockets.

Corey: But even now, in this conversation, the laws are such that I live in California, and we're having this discussion over an online service where we have video and audio being recorded. If I had hit the record button on the audio and it did not tell you, and I did not inform you this was being recorded, I'm guilty of a misdemeanor in California, because all parties are not aware that it's being recorded.

Corey: And while you do have lessened expectations of privacy in a work context, most people have not yet made the mental leap to the idea that literally anything you write or say on a computer can now, more or less, be used in perpetuity to train models on, to be brought up in unfortunate circumstances, et cetera.

Corey: Now, who was it that said, find me six sentences written by the most honest man, and I'll find in them something to hang him? There's this entire philosophy that suddenly we live under in a panopticon, and that's not something I think a lot of us have signed up for.

Paul: In a lot of ways, the closest thing I could describe it to is the boil-the-frog analogy.

Paul: In that we grew up in a world where the internet was very much like the Wild West. It was lawless, it was just a little wild, and people did largely what they wanted, but we also didn't have the tech to go and record video in that, like, Logitech round thing. Right.

Corey: Governments got very upset, but what could they realistically do?

Paul: That's it. But as we've gradually progressed over time and enabled so many things, people became dependent upon all this cool, fancy tech. They never realized the complete implications of what the thing is. And even in the security world, the common theme that I run into on basically a daily basis is someone saying, hey, I just signed up for this new service and I'm now using it to rewrite half my code.

Paul: And I'm like, that's cool. Did you look at the terms? Are you actually an authorized person to sign up for the service on behalf of the company? What are the implications of the data you're now inputting into it? Because before, the worst thing that was going to happen is they might use some of this data and ship it off to some advertising company to go do a thing. But now they're turning it into a language model, contributing it back to whatever grander scheme they're planning, or selling it off to go do that thing.

Corey: There's value to a lot of these AI tools and services, do not get me wrong. I am not trying to hold back the tide of progress; I use a number of them myself in different ways. For some of the other stuff I do that is not podcast conversations like this, I use AI for transcription. There's a woman named Cecilia who does the transcription of these podcasts, and she's delightful and better at it than I would ever be.

Corey: But for a lot of my notes into a voice recorder, I will wind up doing the transcription of that using the Whisper model on my own device. And sure, it takes a while, but I run it overnight, so I don't particularly care. And that turns it into something I can then work with in textual format. That's great.

Corey: This is great stuff. But for something like that, you don't need to hoover up every interaction that humans ever have in the course of their lives in order to understand how a conversation works.
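For listeners curious what that on-device workflow can look like in practice, here is a minimal sketch using the open source openai-whisper package; the model size and file name are illustrative assumptions, not Corey's actual setup.

```python
# Minimal local-transcription sketch using the open source
# openai-whisper package (pip install openai-whisper; ffmpeg required).
# The model size and file name below are illustrative assumptions.
import whisper

# Smaller models are faster, larger ones more accurate. Either way,
# everything runs on the local machine, so no audio leaves the device.
model = whisper.load_model("base")

# Transcribe an overnight batch item, a recorded voice note, to text.
result = model.transcribe("voice-note.m4a")
print(result["text"])
```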

Paul: No, I agree. And I think I've been really mixed about this process as well, and this may just be the fact that I've been doing security and various messy things for such a long time that you kind of become jaded at some of this stuff, right?

Paul: But AI, and everything that we're seeing here, could very well be that conversation you inevitably had in junior high or high school, where your math teacher's like, you're never actually going to have a calculator in your pocket, you need to learn this long division.

Corey: And here, the reality of it is that we have ambient computing, where you can ask an empty room for the square root of any number, if you preface it with a wake word, and you'll get an answer on the spot.

Corey: Of course, LLMs are a bit of a different story, because we've finally, for the first time since the Pentium chips, created computers that are bad at math. It'll be convincing, but it'll also be wrong.

Paul: It's true, and like, maybe if you just poured a bit of Elmer's glue on that, it would work better.

Paul: Yeah,

Corey: exactly. What's the worst that could possibly happen?

Paul: Listen, it's not toxic. It's fine.

Corey: My god. It's, ugh. And it feels like these companies are rushing all over themselves to get these things out, and they're not paying any attention to the incalculable levels of brand damage they are doing to the reserve of trust that they have created over, in Google's case, damn near 30 years.

Paul: I think it's come to the point with this kind of tech, much like the technology that was completely missing from RSA this past year, blockchain, which I don't think I was pitched a single time, that they've jumped directly into this world without considering whether they should, just that they could go and do it, because inevitably they have some belief that somewhere this is going to be a great thing to integrate. The most recent example of this is, are you familiar with iTerm?

Corey: You mean iTerm2?

Paul: iTerm2, I apologize.

Corey: It's the entire history that I'm aware of. I wasn't aware there was a 1. But yes, yes I am.

Paul: In a recent update, within the last week, they decided to integrate ChatGPT into it. That, very similar to you kicking a beehive, also led to a massive problem where people are like, well, just think about the kind of things that people paste into a terminal.

Paul: Whether this stuff should be going to ChatGPT, whether this should be going anywhere off your local system, and just what kind of concerns there are. And even based upon the lengthy threads on GitHub, it sounds like this was entirely benign, and that nothing was happening unless you actually put a token in.

Corey: Yeah, you have to give it an API key to explicitly enable it, let's be clear. iTerm2 is GPL-licensed. It is open source. I don't think this is anyone nefariously trying to do these things, and there were a number of people saying this would be super handy. I have trouble sometimes with chaining together a number of the right terminal commands.

Corey: Being able to have it create that is useful. And this is most definitely opt-in, because until you put your API key in, it won't do anything, so this is probably the right way to do it. But I think everyone is so sensitive right now to the idea that everyone is going to be shoving AI into everything, and that's a problem. My big concern with so much of this generated content is that there's this central conceit that people will care enough to read something that you could not be bothered to care enough to write.

Paul: Part of the challenge that I've really seen in a lot of this is that they're taking it for granted. They assume the data that's coming through there is going to be accurate, and they'll just let it push. And this is one of my biggest concerns with, like, GitHub Copilot, for example. I'm perfectly content with people going and testing it.

Paul: I've played with it a fair bit, and I think it's about 70 percent accurate. But if people are going to start committing that code and actually pushing it into repos without understanding what it's doing, if they don't actually understand the code that it's writing, that's a problem. Not necessarily whether it's malicious or not, because that's a whole other avenue we'll have to worry about eventually, I'm sure, but it's not contributing back to what it is that we're doing.

Paul: Maybe that code doesn't actually function the way we want it to. Is it going to contribute new bugs to the problem? Is there going to be a positive outcome to it?

Corey: That's a good question, because so much of the problem you run into dealing with all of these things is just this lack of awareness of where this is going.

Corey: The technology is evolving so rapidly that it seems like a really neat trick, and suddenly people are trying to pour it onto everything. And what I'm worried about is what they are stepping around in their mad rush to outrun their competition. I pay my cloud provider for a variety of things.

Corey: One of them is security. Another is sanity around certain things. The platform has to work in a reliable, repeatable way. Now that you have computers producing definitionally non-deterministic outputs, that changes a lot. Maybe I don't want it next to the database, or the storage, or the network plane, or, God forbid, the IAM system.

Paul: Absolutely. And I have to hope, and I'm going to knock on wood, that this will not be a scenario I have to run through as we're doing a forensic analysis of a security incident: someone submitted a new access request, it went into a black box, and it came out as a new AWS policy.

Corey: I honestly want some of that on some level. I gave a talk during RSA at Wiz's conference; they asked me to give the keynote, and sure, why not? I love the sound of my own voice. And I gave some examples of generated policies that were flat-out wrong. One example I gave from reality, in one of my accounts, is a CodeBuild role that had a bunch of different permissions assigned to it.

Corey: And the last one that I assigned to it was administrator access, because I couldn't get it to work properly. There's a to-do to fix this, but the last time that thing was touched was 2018, so clearly I've not gotten to that to-do yet. It still works, it does, exactly. So why would I go back and fix those things?

Corey: And that's the challenge when you get security by having to dial in a bunch of things where, by default, nothing can talk to anything else, and you have to enable it piece by piece by piece. Eventually people take shortcuts and allow too much, and they never go back to fix it. Now, AI could absolutely solve some of these things, but it doesn't seem like that's what anyone's building.
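To make the shortcut Corey describes concrete, here is a hypothetical sketch of that over-permissive pattern in Python; the bucket name and the scoped statement are invented for illustration, not taken from his account.

```python
# Hypothetical sketch of the over-permissive pattern described above:
# a CodeBuild role with a scoped statement, plus an admin catch-all
# added "just to make it work" and never removed. Names are invented.
import json

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # The narrowly scoped permission someone actually intended.
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::example-build-artifacts/*",
        },
        {
            # The shortcut: full administrator access, which quietly
            # makes the carefully scoped statement above it meaningless.
            "Effect": "Allow",
            "Action": "*",
            "Resource": "*",
        },
    ],
}

# Flag any statement granting blanket access, the kind of to-do that
# tends to sit untouched for years once the role "still works."
for statement in policy_document["Statement"]:
    if statement.get("Action") == "*" and statement.get("Resource") == "*":
        print("Over-permissive statement found:", json.dumps(statement))
```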

Paul: That's it. And I think it's people coming up with the cool approach, or the fastest thing they can come up with that they might be able to get a profit off of. It's an interesting challenge. And even when we're trying to implement a lot of this tech and trying to figure out how to make use of it, half the battle is even just writing a prompt.

Corey: Oh, yeah, and prompting itself is a skill. My running private joke, well, public now, has been that it's called prompt engineering because it is such a foreign idea to so many engineers to clearly explain what it is you mean and what you're asking for.

Paul: Now, if you also coined that term, I've had several people reference it to me at this point about this exact topic, so you're...

Corey: I've mentioned it in passing a few times, and I've heard it from others as well. It goes around; it's a truism.

Corey: Part of the problem with it is that prompting works for me; I'm relatively decent at turning a phrase. But a lot of the use cases that I have for Gen AI are relatively banal, and that's fine. For example, I mentioned that I'll take advantage of my voice notes: I'll slap the transcript into ChatGPT or Opus or something else and say, here's a voice transcript, please clean it up.

Corey: And it removes repeated words, stutters, et cetera, very well. It goes from very good to nearly flawless, and that's terrific. Now, would I publish that as a blog post? Absolutely not, because there's still some structural editing that needs to happen as a result. My stream of consciousness does not look like a clear explanation of a point from A to B half the time.

Corey: Uh, another thing that I'll use it for a lot, especially when dealing with some of my vendors and having some communications challenges, is I'll write an email and then have one of these things say, great, make this friendlier. Because sounding like a raging ass is easy to do, especially when you're frustrated, but you can always go back and raise the emotional temperature. It's very hard to turn it back down after you call someone a jerk. It's what emojis are for, exactly. And sometimes it puts those in, which is kind of amazing.

Paul: You know what? I can really appreciate that. The thing is, there was a thread on Hacker News yesterday or the day before of people saying, hey, what's your default prompt?

Paul: And it's basically whatever they keep in a text document that they'll paste into every new prompt they open. And that's great for 95 percent of what you need to accomplish. But the moment you need to get anything specific, the amount you need to know about whatever it is you're asking a question about, and the amount of time it'll take to actually write the prompt, makes me wonder whether you should have just gone and written the thing in the first place.

Corey: One thing that I like is image generation, the way that ChatGippity does it in particular. I tell it what I want, and it then applies the system prompt to it, which in this case includes, as a part of it: generate all images, unless instructed otherwise, in 16 by 9 aspect ratio. So suddenly, boom, I can use that in a slide deck.

Corey: Then it winds up running it through some language processing to create a much more thorough, in-depth prompt than the one sentence I put in. So it winds up spitting out an image that it creates, but then my question is, what is the prompt that generated this? And it comes back with a paragraph or two.

Corey: So I like that multi-stage approach. It's below my level of having to care about it, but it's also why I can say, okay, now add a giraffe, and it'll go ahead and do that without the image looking wildly different in some cases.

Paul: And that's really valuable for very narrow use cases like that. But if you're looking for something to go and recreate... well, the common theme I've been seeing recently is people having their resumes or CVs produced this way, or answering hiring questions, where it's very apparent that people are copying and pasting the results into applications.

Corey: And I don't blame people one bit for that. I was never good at getting hired for jobs through the front door. My resume accurately portrays my experience, but for whatever reason, it doesn't dot whatever I or cross whatever T the systems were always looking for historically. So great, here's my actual experience.

Corey: Go ahead and turn this into a resume. If I were in a situation at any point in the last five years where I needed to do that, it would be a slam dunk: step one, go ahead and make this work. Because my philosophy has always been that when you're looking for a job, you first go through the applicant tracking system with the application, then you talk to a recruiter, then potentially someone else in HR, then you start talking to the hiring manager, and that's the first time you're talking to someone who can say yes.

Corey: Everyone before that can only say no, and I've never been a big believer in spending time talking to people who can't give you the yes. So, great: dot whatever I's, cross whatever T's you need to get past that. Someone has pointed out that Gen AI is very good as a bullshit generator. The problem is that so much of our society, and the things that we deal with in that society, are in fact surface-level bullshit.

Corey: Now, start asking it about something you know well and start really diving into it, and it becomes extremely apparent, radically quickly, that it doesn't know what the hell it's talking about. But it sounds good. I've used it myself to generate first drafts of DR policies, when the actual realistic answer to the policy is: we fix AWS bills for a living.

Corey: The root source of truth is the AWS bill itself; if that somehow goes missing, we don't have a problem. It also doesn't need a 24/7 SLA around this, because it happens at banker's hours in Seattle. And in the event of a natural disaster, no one cares about the cloud bill for a little while. But you can't say that outright.

Corey: You need to have a policy you can point at. And as you wind up working with customers with different SLAs, you can iterate on it and go from something that checks a box to something more meaningful. And that's fine. Now, there are times you don't do this. You don't do this for your employee handbook, or your harassment policy, for example.

Corey: Things with legal consequences will hit you with a stick. There, you pay professionals and take their advice; that's why you're paying them in the first place.

Paul: I think what you've really covered is the crux of the problem: people are taking whatever the results are at face value. And so much of what we've spent our time on in security over the last 20 years on the internet is making people take a second thought, making people consider that what is coming at you may not be real.

Paul: That's the inherent phishing problem. This is everything we've tried to accomplish to make sure that people don't put their passwords in terrible places. The level of trust that people have just assumed in the new AI world is concerning, not to do with the data or anything like that; it's the assumption that the output is completely legit.

Paul: That this is exactly what you want to see, 100 percent of the time, and that it's accurate. It's the Elmer's glue problem; it's the, like, it's okay to eat rocks every day.

Corey: Yeah, there's a consistent problem here and I don't know what the right answer is to fix any of it.

Paul: I think it's really just people taking a second thought.

Paul: And as much as I say that people are the squishy human problem inside of security, it's just people taking a step back and taking a second thought about what it is that they're reading, what it is that they're acting on, and realizing that this is all for profit.

Corey: There are similar circumstances I've seen just with the security implications of AI itself, where companies are like, okay, we don't use generative AI here because we can't trust it, et cetera. Great, fine, but how do you stop people from copying and pasting the wrong thing into a form somewhere?

Corey: For example, whenever I have it write an email or something that's in any way sensitive: great, I turned this obnoxious email around, but it's addressed to John Doe at Acme Corp. So then I can go through and do the find and replace, and that's all fine; I can munge any details that don't need to be there.
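As a rough illustration of that find-and-replace step, here is a minimal Python sketch; the names, the placeholder scheme, and the sample email are all hypothetical.

```python
# Hypothetical sketch of the scrubbing Corey describes: swap real names
# for placeholders before pasting text into a hosted model, then reverse
# the mapping when the rewrite comes back. All names are invented.
REPLACEMENTS = {
    "John Doe": "PERSON_1",
    "Acme Corp": "COMPANY_1",
}

def scrub(text: str) -> str:
    """Replace sensitive strings with neutral placeholders."""
    for real, placeholder in REPLACEMENTS.items():
        text = text.replace(real, placeholder)
    return text

def unscrub(text: str) -> str:
    """Restore the original strings in the model's output."""
    for real, placeholder in REPLACEMENTS.items():
        text = text.replace(placeholder, real)
    return text

draft = "John Doe at Acme Corp still hasn't answered my ticket."
safe = scrub(draft)    # safer to paste into a hosted model
print(safe)            # PERSON_1 at COMPANY_1 still hasn't answered...
print(unscrub(safe))   # original details restored
```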

Corey: But not everyone does that. I mean, I see similar patterns when I'm talking to people on the phone and they happen to be in an Uber, and they're mentioning sensitive things in the phone conversation. Do you think your driver's a robot? I mean, Waymo technically is a self-driving car service here in San Francisco, and it is a robot.

Corey: But there's still a camera in the car, and it does record audio from time to time, and they give the warning about that. So, maybe that's not the best place to have the hypersensitive conversations.

Paul: That might even be the next way that people are training language models, just picking up the... actually, that would be amazing.

Paul: Every cab in Vegas has a camera and a microphone in it, and it warns you as soon as you get into the cab. If someone's collecting that data and putting it into a language model, I couldn't imagine what's going to come out of that.

Corey: Remember, every time you call somewhere these days, you go through the phone tree, which no one wants to do anymore, but it's: thank you for calling Your Shitty Bank. This call will be answered in the order it was received. This call may be monitored for quality and training purposes.

Corey: It never occurred to me that in that whole spiel, the word training meant AI training, but here we are.

Paul: It certainly is now. Like many people in the industry, we all cut our teeth doing tech support at some point. Mine was AT&T WorldNet in, like, 1929. Or 1999. Yuck. Please be patient with me, I'm from the 1900s. Different time.

Paul: But absolutely, it goes back to the technology. At the time, there wasn't the tech to hold that kind of data or to process it the way there is now. There's just much greater opportunity to tear it apart and manipulate it.

Paul: We can even go down the avenue of what happens if and when quantum tech manages to decrypt everything that they've been holding on to forever.

Corey: Yeah, well, won't that be something to watch?

Paul: It'll be a different problem.

Corey: Ugh. Or the same. It could be a fascinating language model. So, thank you for taking the time to speak with me.

Corey: I'm curious: if people want to learn more, where's the best place for them to find you?

Paul: I'm on LinkedIn. You can also go to community.mattermost.com and come join us and have a conversation; I'm on there, and that's where we collaborate on our open source platform. Those are probably the two easiest places to get ahold of me.

Corey: And we will of course put links to that. Thank you so much for taking the time to speak with me about all this. I appreciate it.

Paul: Oh, thank you very much.

Corey: Paul Harrison, Senior Security Engineering Lead at Mattermost. I'm Cloud Economist Corey Quinn, and this is Screaming in the Cloud. If you enjoyed this podcast, please leave a 5 star review on your podcast platform of choice.

Corey: Whereas if you hated this podcast, please leave a 5 star review on your podcast platform of choice. Along with an angry, insulting comment that no doubt was generated by an AI system that you've trained on other people's personal data without their consent.