Cyber Sentries: AI Insight to Cloud Security

Exploring the AI-Powered Future of Cloud Security with Thomas Johnson
On this episode of Cyber Sentries, host John Richards interviews Thomas Johnson, CTO and co-founder of Multiplayer, about how AI is transforming cloud security. As AI capabilities rapidly advance, Thomas provides insights into how engineering teams can leverage AI to enhance workflows, generate code, and convert basic sketches into functional systems.
John and Thomas dive into key questions surrounding AI ethics, choosing open-source vs. proprietary models, and best practices for handling sensitive data. Listen in to hear Thomas's advice for developers looking to integrate AI into their tech stacks.
Questions we answer in this episode:
  • How are dev teams currently using AI tools like Copilot?
  • What are the main differences between neural networks and other AI?
  • What security risks exist with generative AI models?
Key Takeaways
  • Focus on choosing the right problem and having clean, quality data.
  • Open-source models offer more control than proprietary models.
  • Do not put sensitive data into generative models.
This fascinating discussion explores how AI is transforming cloud security and development workflows. Thomas provides practical insights into leveraging AI's immense potential while avoiding pitfalls. Whether you're an engineering leader or a developer new to AI, this episode offers an enlightening look at the AI-powered future of tech.
Links & Notes
  • (00:00) - Welcome to Cyber Sentries
  • (00:22) - Meet Thomas Johnson
  • (01:02) - AI Background
  • (01:58) - Neural Networks
  • (02:47) - Current Buzz
  • (04:43) - Integrating AI
  • (07:41) - Improving AI
  • (10:57) - Think About the Problem and Data
  • (12:25) - If Data Is the Problem
  • (14:00) - Security and Access
  • (15:50) - RAG Model
  • (17:52) - Open Source v. Proprietary
  • (19:20) - Training and Inference Side
  • (20:35) - Multiplayer
  • (21:43) - Wrap Up

Creators & Guests

Host
John Richards III
Head of Developer Relations @ Paladin Cloud. The avatar of non sequiturs. Passions: WordPress 🧑‍💻, cats 🐈‍⬛, food 🍱, boardgames ♟, a Jewish rabbi ✝️.

What is Cyber Sentries: AI Insight to Cloud Security?

Dive deep into AI's accelerating role in securing cloud environments to protect applications and data. In each episode, we showcase its potential to transform our approach to security in the face of an increasingly complex threat landscape. Tune in as we illuminate the complexities at the intersection of AI and security, a space where innovation meets continuous vigilance.

John Richards III:
Welcome to Cyber Sentries from Paladin Cloud on TruStory FM. I'm your host, John Richards. Here, we explore the transformative potential of AI for cloud security. In this episode, our guest is Thomas Johnson, the CTO and Co-Founder of Multiplayer. Tom shares how engineering teams are using AI to enhance their workflows, generate code, and even convert a lunchtime napkin sketch into working system architecture. We'll also discuss choosing between open-source and closed-source models. Let's get started. Welcome. Today we have our guest, Tom Johnson, Co-Founder and CTO at Multiplayer. Tom, thank you so much for coming on the podcast here. I really appreciate it.

Thomas Johnson:
Thanks for having me.

John Richards III:
I wanted to start out hearing a little bit about how you got involved in AI. I know from our earlier discussions you've actually been involved in AI for quite a while, so can you share a little bit about how that started?

Thomas Johnson:
I started in college back in the late 80s. I was interested in neural networks and came across the McClelland and Rumelhart book, the classic on neural networks, and just thought, "This is just so amazing, this idea of being able to have software that learns from data, from examples." I was really hooked. So, that's how I got my start. I went into robotics, speech recognition, and other areas where I was able to use some of those technologies in their early phases, and it's been amazing to watch the technology evolve since then.

John Richards III:
You're using the term neural networks here. I'm curious how that's different. I hear a lot of folks talk about AI, generative AI. What's the difference? Or, is this just folks co-opting new terms instead of using neural networks?

Thomas Johnson:
Well, AI is the umbrella for all of the different types of algorithms that have to do with things like machine learning. Expert systems are a subset of AI. Lots of stuff falls under the label of AI. Neural networks are a subset of machine learning. When people talk about AI, and especially the things that are happening with large language models, they're probably talking about neural networks. The generative stuff is really based on neural networks. That's the core. So that's why, for me, I'm using that term more specifically.

John Richards III:
Can you talk a little bit about how neural networks, and the research around them, have evolved from back then to where we're at now? All of a sudden it's the hot new buzz; everybody's been talking about it for a little over a year, since the first big gen AI rolled out. Why is the buzz happening now, and not back then? What changed, and how did we get to where we're at today?

Thomas Johnson:
In the beginning, you started with very small networks, a small number of layers, maybe three layers, a small number of neurons, because it was just the beginning. You didn't necessarily have large computers. You didn't have GPUs yet. You were working on 286 computers, so you were dealing with the limits of computation, memory, and data sets. Plus, in the very beginning, it was starting small. Eventually, as compute grew, with access to better data, better algorithms, and experimentation with different types of networks, you could get deeper networks and get value out of that. You had better data, so you could train more, train better, and experiment more.
So, it took a little while. In the 90s, you had this slowdown phase where the compute and memory needed to catch up to where some of the early algorithms were. When that happened, the algorithms started to change. You could deal with better data, better compute, and better memory, and now it's all coming together. That's why you see so many different types of machine learning algorithms, and that's what led to the latest advances in large language models. It's really an exciting time, because now you've got great compute, memory, and data sets all coming together, and it's just really exciting.

John Richards III:
Yeah, just an incredible boom right now. With that, Multiplayer is a big player in the DevOps and application-building space. So I wanted to get some of your insights into how you are seeing development teams begin to integrate AI into their pipelines and processes. What are you all seeing as AI ramps up and developers look to use it to speed up their workflow, or make things easier or faster?

Thomas Johnson:
At Multiplayer, we've got a product that helps teams work on distributed software. It's tough to collaborate. It's tough to know what you have: system architecture, components, APIs, all of that stuff. So, we focus on that. We are using AI in a number of places in our products. But I would say that, for our customers, you're seeing developers use Copilot to help with generating code so that you don't have to do all of the boilerplate stuff yourself. That's such a nice thing to be able to do, to start typing and have it fill in the rest of the function without having to do it again for the thousandth time. That's nice to have.
We are taking a system perspective to generating code, and APIs, and architectures. Because we have the data together in one place, we can do some really interesting things with it. We're seeing a lot of developers and a lot of companies trying to figure out: how do we take the advantages of large language models especially and put them to work for us? Whether that's taking data they have, putting it in a vector database, doing some intelligent prompting with RAG methods and others, or refining models.
I think people are looking for opportunities to use this technology to improve workflows and automate tasks. There are a lot of exciting, flashy things going on with generating videos and images and so on. But to me, the seemingly mundane stuff is just as exciting: replacing manual workflows, or accelerating things by generating content. Acting as an editor, as opposed to somebody who has to generate content from zero, is just as exciting. You're seeing that happen in a lot of different industries.

John Richards III:
I talked to a developer recently, and he was saying what he loves seeing in the AI space is folks using AI to enhance humans, rather than replace them. Everybody's a little worried about, "Oh, am I going to lose my job?" There's some talk about that. But there are a lot of opportunities just to make humans better, remove some of those mundane tasks, or accelerate the way we work. We can work in new ways. Now, you mentioned that Multiplayer is looking into using AI to help improve some of these workflows. Can you give maybe an example or two about how you all are looking at incorporating AI into what you're doing?

Thomas Johnson:
One of the things we do today is, we allow you to take your napkin drawing, or your photo of your whiteboard for your system architecture, or the system you're looking to develop, and basically upload it. We'll read it, interpret it, try to generate a system architecture for you. Right now, in our first phase, it's more about documentation, tracking what you have. But in the near future, it will be about actually creating a platform from scratch, from a sketch, being able to click to deploy that using reusable components. So, if you see Redis, and MongoDB, and other things in your system architecture, plus maybe a custom component or two, wouldn't it be nice to just sketch that, upload it, generate a platform, be able to click to deploy? We're not far from that. We'll probably demonstrate that capability in the summer.
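To make that concrete, here is a rough, hypothetical sketch of the general technique: send a whiteboard photo to a multimodal LLM and ask for structured output. It uses the OpenAI Python client purely as a stand-in; the model choice, prompt, and JSON shape are illustrative assumptions, not Multiplayer's actual pipeline.

```python
# Hypothetical sketch: ask a multimodal LLM to turn a whiteboard photo
# into a structured architecture description. Illustrative only; not
# Multiplayer's actual pipeline.
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

with open("whiteboard.jpg", "rb") as f:  # hypothetical photo
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="gpt-4o",  # any vision-capable chat model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "List the components and connections in this "
                     "architecture sketch as JSON, e.g. "
                     '{"components": ["redis", ...], '
                     '"edges": [["api", "redis"], ...]}'},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```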
But, you also have other things. When you put information together, you've got this powerful data set. So when you have system architecture, components, APIs, and other things, we are working on a new way to interact with your API. So imagine, instead of sending an OpenAPI file or a Swagger link to somebody who uses your APIs, where they have to scratch their head and say, "How do I use this? What do I call first and second?", you could just chat with it and say, "Okay, I'm looking to get this, generate some JavaScript. Boom, try it out." It's a better way to interact with your APIs and your systems, and that's definitely an interesting use of large language models. We have lots of plans beyond that.
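A similarly hedged sketch of the "chat with your API" idea: place the OpenAPI spec itself into the prompt context and ask for runnable JavaScript. The spec, model name, and prompt here are made up for illustration.

```python
# Hypothetical sketch: "chat with your API" by placing an OpenAPI spec
# in the prompt context. Spec, model, and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A tiny OpenAPI fragment standing in for a real spec file.
openapi_spec = """
openapi: 3.0.0
paths:
  /users/{id}:
    get:
      summary: Fetch a user by id
      parameters:
        - {name: id, in: path, required: true, schema: {type: string}}
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "Answer questions about this API using only the "
                    "spec below.\n" + openapi_spec},
        {"role": "user",
         "content": "I'm looking to get a user by id. Generate some "
                    "JavaScript I can try out."},
    ],
)
print(response.choices[0].message.content)
```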

John Richards III:
It's not a "here's a brand new way of working." It's taking ways we're already familiar with. If you're in an office and you want to know how something works, you don't go read the docs. At least that wasn't what I did. I went over and talked to the person who knew about it, and it was very conversational. So I love that this is bringing that back. Maybe you're working remote and you don't have that same thing, but you could still have that conversational approach. Sometimes you don't quite know what you're looking for, and you need to talk around it. So, being able to do that with docs seems interesting.
Of course, I love the back-of-the-napkin or whiteboard example, because it's another way that developers work. Very often you might say, "Let's go out for lunch and just hash out what's going on here." I'd sit down with some other folks, and we'd just be scrawling something. Then when you get back you think, "Oh, now I've got to recreate this." Or after a meeting you take a picture of the whiteboard: "How do we save this?" But the idea that you could scan that in, go the further step, and say, "Let's analyze this and actually build out documentation based on it, or actually build out that infrastructure," is fascinating.

Thomas Johnson:
Thank you. I agree. That's the advice I give to people looking to get into AI: "Look for current pain points. Those won't change. If you can use AI to solve current pain points, then you have a real opportunity, for a business, or a new feature, or whatever."

John Richards III:
For those folks out there looking to solve these problems, somebody's got an idea and they're like, "All right, I'm trying to find the right algorithm, the right LLM, or whatever I should use here." What advice would you have for them? How should they go about that? Or maybe, what are they missing as they're trying to put this together?

Thomas Johnson:
What I would say is, don't think about the model; think about the problem and think about the data, because the models are very easily accessible. There are so many new developments that trying to keep track of everything is impossible, but there's easy access to great models. So the problem isn't necessarily in training a model and getting it to do what you want. It's in choosing the problem and getting good quality data. That's where you're going to spend your time, and where you want to spend your time.
You don't want to waste your time on problems that aren't really problems, where you're solving something that nobody really wants. So, that's an important step. But then, once you choose the right problem, you need data: clean data, as clean as you can get it, and enough of it. Sometimes, depending on the problem, it could be just a little bit of data. But most times, it's a lot. That is going to be the biggest hurdle to applying these machine learning models to solve a real problem. So, data is probably most important in my view.

John Richards III:
Fascinating. With all the model providers, everybody's talking about, "Which is the better model?" Now I'm wondering, are folks out there working with bad data, but so focused on, "Well, maybe it's my model that's the problem. I'm going to swap to something else," that they miss it? What does it look like if the data's actually the problem? Is there a way to identify, if you've been trying to do this, that you don't have good data? Or is it just junk in, junk out?

Thomas Johnson:
Yeah, pretty much. It's hard. I don't think there's one way to identify whether you've got bad data or whether it's the algorithm. I think you have to dig into the data; don't treat the data as a black box. A lot of times, you have to go in and look at rows, look at raw values, and just try to understand it yourself. If you can't understand it yourself, or see patterns yourself, it's probably going to be difficult for a model to understand it and see patterns.
So I would say that the sanity check of doing the hard grunt work of looking at the data and understanding your data is really important, as is developing an intuition yourself. There are lots of ways to solve problems and lots of algorithms to use. It's not just data into this black-box neural network and then you get something good out. There are lots of ways to configure these models, lots of parameters to tweak. If you have an intuition about the data and an understanding of the data, the rest of it is a lot easier, I would say.
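That grunt work can start very simply: pull the raw rows up on screen before blaming the model. A minimal pandas sketch, with a made-up file name and no assumptions about the columns:

```python
# Minimal sketch of "look at the raw data before blaming the model".
# File name is hypothetical; the checks are generic.
import pandas as pd

df = pd.read_csv("training_data.csv")

print(df.head(10))            # eyeball actual rows, not just summaries
print(df.describe())          # ranges and outliers in numeric columns
print(df.isna().sum())        # how much is simply missing?
print(df.nunique())           # near-constant columns carry no signal

# Duplicates quietly inflate apparent data volume and skew training.
print(f"duplicate rows: {df.duplicated().sum()}")
```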

John Richards III:
I know with data... That also brings us a little bit into the security aspect here, because folks are sharing data, and then they're a little bit worried about who has access to that data now. So what are your thoughts from a security perspective? For folks looking to build an application on a neural network or something, what should they be thinking about in terms of saying, "Hey, I've got this data. How do I make sure that it's not leaking out somewhere?"

Thomas Johnson:
Yeah, that's a great question, and I think this applies more to generative models than others, because you're dealing with data that's maybe out on the internet, or maybe in your corporate network, and you're doing some mapping, maybe text-to-text, or something where you could have some input that leaks out, or some output that was trained on that leaks because it's almost memorized by the network. You see this with some generative models that generate images, where copyrighted material comes out.
So, think about how that happened: it was part of the training set, the model memorized it, and it came out. Imagine if that's corporate data. So I would say that, if you're using generative models, be careful with how you train. If you're using a RAG model, you have more control. But if you're refining, if you're training a model, and the data is inside the model, so to speak, do not expect the model to keep it secure. You can't just prompt it and say, "Please don't tell my secrets." If the secrets are in there, the assumption you have to make from a security perspective is that they will get out. So I would say, if you don't want it out, do not put it in.
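One small, concrete way to act on "if you don't want it out, do not put it in" is to scrub obvious secrets before text ever reaches a prompt or a training set. This toy pre-filter uses a couple of illustrative regexes; a real pipeline would lean on proper secret scanners and data classification.

```python
# Toy pre-filter: strip obvious secrets before text reaches a model.
# Patterns are illustrative; a real pipeline would use a proper secret
# scanner and data-classification rules.
import re

PATTERNS = [
    (re.compile(r"\b(?:AKIA|ASIA)[A-Z0-9]{16}\b"), "[REDACTED_AWS_KEY]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[REDACTED_EMAIL]"),
    (re.compile(r"(?i)(api[_-]?key|password)\s*[:=]\s*[^\s,]+"),
     r"\1=[REDACTED]"),
]

def scrub(text: str) -> str:
    """Apply each redaction pattern in turn and return the cleaned text."""
    for pattern, replacement in PATTERNS:
        text = pattern.sub(replacement, text)
    return text

print(scrub("password: hunter2, contact ops@example.com"))
# -> password=[REDACTED], contact [REDACTED_EMAIL]
```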

John Richards III:
For those of us who aren't familiar with how retrieval augmented generation, or RAG, differs from ChatGPT, can you share a high-level overview of RAG models and how they help secure sensitive data?

Thomas Johnson:
That is still using large language models, but in a particular way. You've heard about vector databases. Imagine taking some of your documents or your knowledge base, vectorizing it, putting it in a vector database, and then referring to it, using some of that information in the prompt itself to a large language model. That's a RAG approach. It takes an existing model that has certain knowledge and, inside the prompt, inside the context window, adds a little bit of knowledge, and then maybe the question, as part of the context, and lets the large language model use that context to answer the question. So, that's that approach.
Another approach is what's called refinement, where you're actually making changes to the model, or training your own model from scratch, which most people would not do unless you're one of the biggies. So you've got those two approaches, and the RAG approach to enhancing the knowledge a large language model has is the go-to method for doing some of the interesting things that you're seeing. From a security perspective, that's data you're passing to the large language model, probably along with the question that's being asked. There are no guarantees that data will not just get spit back out to the user. Depending on your application, you want to be careful. You just have to be careful with your data. You can't trust the process to make it secure for you. You have to make decisions about what data you want used, and how it's used, and it can be tricky.
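Here is a self-contained toy version of that RAG flow: vectorize a small knowledge base, retrieve the closest snippet for a question, and place it in the prompt context. TF-IDF cosine similarity stands in for a real embedding model and vector database, and the assembled prompt is printed rather than sent to an LLM; all names and content are illustrative.

```python
# Toy RAG sketch: retrieve the most relevant snippet, then place it in
# the prompt context. TF-IDF stands in for a real embedding model and
# vector database; everything here is illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

knowledge_base = [
    "The billing service exposes POST /invoices and requires an API key.",
    "Redis caches session tokens with a 15-minute TTL.",
    "Deploys go through the staging cluster before production.",
]

question = "How long do session tokens live in the cache?"

# "Vectorize" the knowledge base and the question.
vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(knowledge_base)
query_vector = vectorizer.transform([question])

# Retrieve the closest document.
scores = cosine_similarity(query_vector, doc_vectors)[0]
best_doc = knowledge_base[scores.argmax()]

# Augment: retrieved knowledge goes into the context window, then the
# question. A real system would send this prompt to an LLM.
prompt = (
    "Answer using only this context:\n"
    f"{best_doc}\n\n"
    f"Question: {question}"
)
print(prompt)
```

Note that the retrieved snippet travels in plain text inside the prompt, which is exactly why Tom's caution applies: whatever you retrieve can come straight back out to the user.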

John Richards III:
There are a lot of different models out there. Some are open-source, and some are proprietary. What are the considerations for folks out there looking to choose one? Does that factor in when you're thinking about this?

Thomas Johnson:
I am a fan of open-source models. You can have complete control over the model itself and the use of data as you're passing it through. With a black-box service, yes, they say, "Hey, we have these terms. We won't use your data. Trust me." I do trust some of these companies, but you have to be cautious, I'd say, about that. There is a risk there that you don't have with open-source models. It really depends on the problem and what you're looking to do. Ultimately, you have a lot more control over the open-source models.
But the most powerful large language models out there now are closed-source. The gap between the performance of the open-source and closed-source models is decreasing, and that's wonderful. Hopefully, we'll get some parity. Maybe the closed-source models are going to be ahead for a bit, by months maybe, and that's okay. But it's really exciting what's happening with the open-source models. So much research is going on there that I'm hopeful they will always be viable alternatives to closed-source models.

John Richards III:
It sounds like, if you're working on getting the right data, you may not need so much overhead on the computational side, if you've got things set up right to begin with. So one of these other models that's maybe not quite as powerful can really deliver what you're looking for, as long as you've got the right data.

Thomas Johnson:
Yeah. There's the whole training phase, which is very expensive. The inference side, which is just using a trained model, is a lot less expensive, and there are so many great tricks, like quantization, to basically shrink these large numbers: instead of 64-bit floating-point values, you use 4 bits. It's incredible that the performance can be so good. With these quantized models, you're going to see wonderful research making the inference side even cheaper to run. You'll probably see the best large language model performance on mobile phones in the not-so-distant future, or on embedded devices; there's a lot of really interesting work there. It's just going to make this technology more accessible, and you'll see it in more and more places.
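A tiny numpy sketch of the core idea behind quantization: map floating-point weights onto 16 integer levels (4 bits) and back, then check the error. Real schemes are far more sophisticated, and actual LLM weights are typically 16- or 32-bit floats rather than 64; this just shows the mechanics.

```python
# Toy symmetric quantization: map float weights onto 4-bit integers
# (16 levels) and back, then measure the error. Real LLM quantization
# schemes are far more sophisticated than this sketch.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=1000)  # stand-in for a weight tensor

# 4 bits, symmetric: signed integer levels in [-8, 7].
scale = np.abs(weights).max() / 7
quantized = np.clip(np.round(weights / scale), -8, 7).astype(np.int8)

# Dequantize for use at inference time.
restored = quantized.astype(np.float64) * scale

error = np.abs(weights - restored).mean()
print(f"mean absolute error: {error:.4f}")
print(f"storage: {weights.nbytes} bytes as float64 -> "
      f"{len(weights) // 2} bytes at 4 bits per weight")
```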

John Richards III:
I love it. Well, thank you, Tom, for coming on this episode. I appreciate it. Before we wrap up, I'd love to hear more: I know Multiplayer just had a new feature release, so maybe you can share a little bit about what that was, and how folks can learn more about Multiplayer.

Thomas Johnson:
Sure. Go to multiplayer.app. Again, we are a company that offers a SaaS platform for teams that work on distributed software. It's really hard to communicate. It's really hard to know what your system architecture is. It's hard to change things without breaking stuff. We have platform version control. We have diagrams, sketches, visualizations, API designers, all this great stuff. We're about to launch a feature called Radar, which will use OpenTelemetry to help keep track of your system architecture and other interesting things. And soon, a feature called Pulsar, where you'll be able to create a platform from scratch and launch it with a single click. So, check us out.

John Richards III:
Awesome. Thank you, Tom. Folks, go check it out. It's really cool what they're doing over there at multiplayer.app. You can find that in the show notes. Tom, thank you for coming on. I appreciate it. It's been wonderful getting to talk to you here today.

Thomas Johnson:
Thanks for having me.

John Richards III:
This podcast is made possible by Paladin Cloud, an AI-powered prioritization engine for cloud security. DevOps and security teams often struggle under the massive number of notifications they receive. Reduce alert fatigue with Paladin Cloud. Using generative AI, our model risk-scores and correlates findings across your existing tools, empowering teams to identify, prioritize, and remediate the most important security risks.
If you'd like to know more, visit paladincloud.io. Thank you for tuning in to Cyber Sentries. I'm your host, John Richards. This has been a production of TruStory FM. Audio Engineering by Andy Nelson, music by Amit Saghi. You can find all the links in the show notes. We appreciate you downloading and listening to this show. We're just starting out, so please leave a like and review. It helps us to get the word out. We'll be back April 10th, right here on Cyber Sentries.