IIIMPACT is a Product UX Design and Development Strategy Consulting Agency.
We emphasize strategic planning, intuitive UX design, and better collaboration across business, design, and development. By integrating best practices with our clients, we not only speed up market entry but also enhance the overall quality of software products. We help our clients launch better products, faster.
We explore topics about product, strategy, design, and development. Hear stories and learnings on how our experienced team has helped launch hundreds of software products in almost every industry vertical.
And you're talking hundreds of documents that are coming into companies. They're trying to have their staff analyze so much documentation that's coming in. I mean, it just becomes very daunting, or you have to hire so many people to do that, but AI should help streamline that process.
Speaker 2:That's what AI is great at. It can sort of take any problem that you give it and sort of reason itself around any issues like that. So you don't have to keep tweaking it every day to get it to work, or, you know, have your staff work on really boring sort of data analysis that no one really wants to work on anyway.
Speaker 3:Yeah. Are ML and GPT one and the same? And I guess the short answer is yes and no.
Speaker 1:Hello, everybody. Welcome back to another episode of the Make an Impact podcast. I'm your host, Makoto Kern. I have here, the usual suspects, my integration team, Brinley and Joe.
Speaker 3:Hello. Hello.
Speaker 2:Hey. Hey, guys.
Speaker 3:Good to be back.
Speaker 1:Yeah. Alright. So today, on today's episode, we will be discussing if ML, machine learning, and GPT models are one and the same. Yeah. So it's
Speaker 3:a pretty technical topic. Yeah. But one that I'd be keen to kind of hear the sort of, not assumptions, but the experience that we've had with it. Because my experience with machine learning is, you know, it's kind of built one sort of construct, and then dealing more and more with AI, and especially GPT models, you do kind of go, well, you know, are ML and GPT one and the same? And I guess the short answer is yes and no.
Speaker 3:So we'll kind of dive into that and unpack some of the different areas of machine learning and GPT. So, really, I've found those terms are used interchangeably by a lot of people, to be like, okay, well, we've got to do machine learning. And they're like, okay, are you talking about GPT, or, you know, what are you talking about?
Speaker 3:So that kind of caused a bit of confusion about their differences. And, really, in summary, I guess, both fall under the broader umbrella of artificial intelligence. However, they really serve distinct roles. So if we look at GPT models, and you think of OpenAI's GPT-4, the release of Claude, Google's Gemini, they kind of represent a specific subset of machine learning designed to process and generate human-like language. Whereas if you look at something like traditional machine learning, it really encompasses a wide range of techniques, which are mainly kind of focused on pattern recognition and prediction across various data types.
Speaker 3:So you kind of see that there's overlap, and, again, we'll kind of explore that. So the misconception arises really when these GPT models are viewed as the entirety of machine learning rather than really one powerful example of its application in, you know, natural language processing. I think understanding this distinction is crucial for accurately grasping how AI innovations like Gemini and, you know, Claude fit into the broader world of machine learning. So I think to best articulate the differences, we can look at a few areas: things like, you know, the general definition, the type of learning, the purpose and output, data and training, usage and applications, and lastly, flexibility. Because just doing this exercise, you know, we haven't dealt with all types of machine learning.
Speaker 3:It's interesting to to see the overlap and the differences just by looking at these kind of various areas.
Speaker 2:So out of interest, this is my assumption, and then tell me if I'm correct or not based on what you're reading through there. So machine learning is the practice of creating models using data — transformations, whatever algorithms you need — which actually creates a model at the end of it. You know, that's the application of machine learning. That's what machine learning is.
Speaker 2:Your output is always generally a model that you have at the end. And ChatGPT — that model is really just a subset of machine learning. Right? It has gone through the same process. ChatGPT is really a model that was also built by machine learning.
Speaker 2:Is that correct? Is that the right way of saying it too?
Speaker 3:Well, I think so, from my understanding. And, again, there's so many different applications, but it's also — I think what we'll touch on is also the pretraining that's different. So, you know, usually, you have your training dataset, whereas with GPT, you really are looking at, well, this is pretrained on a certain dataset. So let's look at the kind of definition that I think touches on those.
Speaker 2:Yeah. So it's kind of more the traditional usage of machine learning, where you need to get a model at the end of it. But how that's built, compared to how GPT is built, or a model like GPT is built, are very different. Is that kind of where we're going with this?
Speaker 3:I think yeah. Exactly. And I think, again, those differ in the various sections that we'll look at because there is the sort of broad definition. Mhmm.
Speaker 1:I'm assuming, to get even more kind of nerdy and technical, when you get into machine learning, like, are they using specific, like, neural network patterns in how they're creating it? Or even fuzzy logic can be considered machine learning, because you've got a certain model that you're training for, and, like you said, it's a very specific thing that you are training. Like fuzzy logic, if you're automatically adjusting, like, a temperature gauge or whatever — it's not necessarily neural networks, but it's fuzzy logic, where it's, like, you're just trying to get a close enough temperature setting and have it automatically detect and adjust itself?
Speaker 3:Yeah. It is. And that's probably worth saying. So if we look at machine learning and its definition, it's a broad field — and I think that's what's important: it is very broad, all depending on the application. A broad field within artificial intelligence that focuses on creating systems that can learn from data.
Speaker 3:So that's sort of things like developing algorithms that analyze input data, learn from it, and make decisions or predictions based on new data. So you can think of things like image classification — you know, trained to identify whether images contain a cat or a dog — or fraud detection, like looking through bank transactions and kind of going, okay, these look like fraudulent transactions. So varied, but obviously creating the system that learns from the data. Whereas you've got GPT models, or generative pretrained transformers. That's a specific subset of machine learning, and that's obviously what we've talked about before, the large language models, and that's using a neural network architecture.
Speaker 3:And that's where, obviously, the transformer comes in, and they're trained on vast amounts of text data to understand and generate that human like language. So there's that sort of generating aspect as well. Yeah.
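The learn-from-data loop described here — train on labeled examples, then predict on new data — can be sketched in a few lines. This is a toy nearest-centroid classifier; the two numeric features standing in for the "cat or dog" images are invented for illustration, not how a real image classifier extracts features from pixels:

```python
# Toy sketch of "learning from data": average labeled feature vectors,
# then label new data by the closest centroid. The features
# (ear pointiness, snout length, both 0-1) are invented for illustration.

def train_nearest_centroid(samples):
    """Average the feature vectors per label -- the "learning" step."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc] for label, acc in sums.items()}

def predict(centroids, features):
    """Assign the label whose centroid is closest (squared distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(centroids, key=lambda label: dist(centroids[label]))

training_data = [
    ([0.9, 0.2], "cat"), ([0.8, 0.3], "cat"),  # pointy ears, short snout
    ([0.3, 0.8], "dog"), ([0.2, 0.9], "dog"),  # floppy ears, long snout
]
centroids = train_nearest_centroid(training_data)
print(predict(centroids, [0.85, 0.25]))  # a cat-like example -> "cat"
```

The point of the sketch is the shape of the workflow — task-specific data in, task-specific model out — which is exactly what gets contrasted with GPT models below.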
Speaker 1:And you're probably — I wonder if, you know, to get into, like, maybe Teslas and how they have self-driving cars, where it's a different model because they've got tons of data from a video standpoint. Mhmm. But then you can throw that car into any kind of situation, and it learns on the fly, kind of, instead of having a preprogrammed set of data. So, like you said, it's more, I guess, generative versus just straight kind of feedback and trying to learn by teaching it millions of different models, or billions.
Speaker 3:Yeah. It's a good point. I mean, there's so many different sets within that as well. I mean, you're looking at that where it would be machine learning, because it's gonna take all the, you know, the pattern recognition and things like that and apply it to, you know, the video feed that's coming in. But it has to have gone through that machine learning process of a car, a truck, you know, a bicycle, you know, and scenarios, you know, beforehand.
Speaker 3:So it's maybe not, in my understanding at least, generating in the same way as the transformer kind of models that are actually creating something new. It's more analyzing, again, because it's still data that's feeding into something like a self-driving car, and going, okay, identified that, identified that, do that. So those are the basic definitions. But then we also look at the next area, which is the type of learning. So, you know, looking at machine learning, you've got models that typically train for specific tasks.
Speaker 3:So that's things like classification of data, or regression, or clustering. Some examples are, like, house price prediction with supervised learning, where a machine learning model can be sort of trained to predict the price of houses based on features. So it looks at, like, okay, what's the location? What's the size?
Speaker 3:What's the number of bedrooms? Or something like customer segmentation, where it's unsupervised learning — retailers can use unsupervised learning to segment customers based on their purchasing habits, kind of enabling things like personalized marketing. So very much, I guess, looking at those specific tasks and, again, going back to GPT —
Speaker 2:Narrow objective. Yes.
Speaker 3:That's it.
Speaker 2:That does, yeah, for a very specific task.
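The house-price example above is plain supervised regression, and a minimal sketch fits in a few lines. The sizes and prices below are invented toy numbers, and a closed-form straight-line fit stands in for whatever model a real system would train on location, bedrooms, and so on:

```python
# Toy supervised learning: fit price = slope * size + intercept by
# ordinary least squares. All numbers are invented for illustration.

def fit_line(xs, ys):
    """Closed-form least squares for a single feature."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    return slope, mean_y - slope * mean_x

sizes = [50, 80, 100, 120]     # square meters (toy data)
prices = [150, 240, 300, 360]  # thousands (toy data)
slope, intercept = fit_line(sizes, prices)
print(round(slope * 90 + intercept))  # predicted price for a 90 sqm house -> 270
```

The "narrow objective" in the conversation is visible here: the model exists to answer exactly one question, and its labeled training data is organized around that question.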
Speaker 3:Yeah. And then, while that can be supervised and unsupervised, you look at the very much unsupervised nature of GPT models, where it's that unsupervised learning during, again, the pretraining phase. And that's with the goal of really learning to predict the next word in large amounts of text. And that's where, again, I think — you know, if you check out our podcast on emergent behavior — that's something that's come from this unsupervised learning, where it's been given one task and it does a whole lot of other things that were unexpected.
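The pretraining objective mentioned here — predict the next word from raw text, with no human-written labels — can be illustrated with a toy model. A bigram count table stands in for the transformer (same objective, vastly simpler model), and the little corpus is invented for illustration:

```python
# Toy sketch of the GPT pretraining objective: learn to predict the next
# word from raw, unlabeled text. Counts stand in for the transformer.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept near the mat and the cat purred".split()

# "Pretraining": tally which word follows which. No labels are needed --
# the text itself supplies every prediction target.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation seen during training."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # -> "cat" (follows "the" three times vs. "mat" twice)
```

A real GPT model does this over trillions of tokens with a neural network instead of a count table, which is where the unexpected emergent behavior comes from — but the training signal is the same self-supervised next-word objective.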
Speaker 3:So, again, looking at use cases.
Speaker 2:Yeah. Yeah. Sorry, I was just gonna say too: the great thing about the GPT model is that, because it's so general, it can be used for a wide application of things, not just that narrow, specific, pinpoint task that, you know, standard machine learning models have been made for.
Speaker 2:But what's also great is fine-tuning. Fine-tuning is obviously the process of taking an existing model and then adding more specific data and working with it in a different way, or sort of fine-tuning it to a more specific task. You can take something like a GPT model, and you can obviously just prompt it in certain ways to get to what you want to do. But if you really want to go to the next level, you can fine-tune it and provide it even more specific, non-public information that it doesn't really know, like your own company information or your own personal information. So fine-tune it that way to get it to specialize, maybe in your company's knowledge or your company's domain knowledge, as an option. There's loads of different options around that.
Speaker 2:But, yeah, that's what's great about them too.
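The fine-tuning idea Joe describes — start from a model pretrained on general text, then continue training on your own domain data so domain-specific completions win out — can be illustrated with the same kind of toy next-word model. Counts stand in for neural weights here, and all the text is invented for illustration:

```python
# Toy sketch of fine-tuning: pretrain on general text, then continue
# training on (invented) company-specific text. Counts stand in for weights.
from collections import Counter, defaultdict

def train(counts, text):
    """Add next-word counts from text -- used for both training phases."""
    words = text.split()
    for word, nxt in zip(words, words[1:]):
        counts[word][nxt] += 1
    return counts

def predict_next(counts, word):
    return counts[word].most_common(1)[0][0]

# Phase 1, "pretraining" on general text: the model completes "team" -> "ships".
model = train(defaultdict(Counter), "the team ships code and the team ships tests")
print(predict_next(model, "team"))  # -> "ships"

# Phase 2, "fine-tuning" on domain text: domain counts now dominate.
train(model, "the team reviews contracts and the team reviews contracts and the team reviews contracts")
print(predict_next(model, "team"))  # -> "reviews"
```

Real fine-tuning adjusts the weights of a neural network rather than adding counts, but the shape is the same: the general model is the starting point, and a smaller, more specific dataset nudges its behavior toward your domain.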
Speaker 1:Well, that's — I assume there's a difference — oh, sorry. I assume there's a difference too between video or image processing versus video generation, like Midjourney and all that. That's more GPT versus —
Speaker 2:that's more machine learning, I'd almost imagine. Right, Bryn? If you think about it. Like, something like creating video — sorry. Or do you think it's a mix?
Speaker 3:Well — no, it is a mix, Joe. I'd imagine that it has to be. So you look at some of the transformer — the video kind of transforming technology — that doesn't have enough training data on things like hands or walking cycles. So it's very much about the pretraining data. It's doing a transformer, and I suppose it's more GPT in terms of, it needs that pretraining and then it generates. So it is flexible.
Speaker 3:And, kind of, what you were talking about, Joe — you kind of read almost into the next section, I think, which is great: purpose and output. You know, where you can say, well, machine learning models are, you know, task-specific, meaning that their architecture and training are designed to solve a particular problem. You know, that output depends on the specific application, so whether it's, like, spam email detection or disease diagnosis. And, again, getting back to GPT, as you were saying, Joe: completely versatile in terms of their output. And I think that's where we can see things like video generators be similarly versatile, in that they can generate a lot of different outputs based on all that pretraining data.
Speaker 1:Yes. Generating more like a new thing, a new output, versus just analyzing what's there. Yeah.
Speaker 3:Yeah. Exactly.
Speaker 2:So — yeah. Just out of interest, I was looking at Sora, just because I was curious myself. That does seem to be based on a machine-learning-based system. It's not really built off an existing GPT model. It's not fine-tuned off one.
Speaker 2:Interesting. It specifically uses a diffusion model combined with transformer architectures. So it's very much, you know, obviously, again —
Speaker 3:Well, the transformer architecture would be — yeah, would be very much —
Speaker 2:— how GPT models are made. Very similar. Yeah. But it's not like it's taken from a full GPT model and fine-tuned, or — it doesn't rely on a GPT model to operate. It's definitely its own thing, using very similar methods of training it, but —
Speaker 3:Yeah. That's interesting. So the next section we can look at is the data and training. So machine learning really requires structured datasets, whether they're labeled or unlabeled. So you can think of things like tabular data, images, time series.
Speaker 3:They're trained on specific, task-relevant datasets. And, again, for things like stock price prediction or image recognition, you know, you've got that structured dataset. Whereas when you're looking at GPT, it's trained on a huge dataset of unstructured text. So whether it's books, websites, articles, it's gotta find and learn those patterns in the language, the grammar, even the facts, to really output what it does — you know, a lot of the things like being able to answer the questions that we ask it, filling in blanks of, you know, even incomplete sentences or paragraphs. It is quite different there, just on that sort of data and the way it's trained.
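The data contrast being drawn here — structured, task-labeled rows versus raw text that supplies its own prediction targets — looks roughly like this (all data invented for illustration):

```python
# Classic ML training data: structured rows where someone supplied
# an explicit label column for the task (here, the price).
structured_rows = [
    {"size_sqm": 80, "bedrooms": 2, "label_price": 240},
    {"size_sqm": 120, "bedrooms": 3, "label_price": 360},
]

# GPT-style pretraining data: raw text, where each word's "label" is
# simply the word that follows it -- no human labeling step required.
raw_text = "models learn patterns from data"
words = raw_text.split()
training_pairs = list(zip(words, words[1:]))
print(training_pairs)  # [('models', 'learn'), ('learn', 'patterns'), ...]
```

That automatic labeling is what lets GPT pretraining scale to books, websites, and articles: the dataset doesn't need to be curated around one task the way the structured rows do.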
Speaker 3:Then looking at the second-last section, which was usage and application. So machine learning is used in obviously a wide array of industries and fields, and we can think of, you know, exactly what you were saying, Makoto — things like self-driving cars, you know, the machine learning models that process that camera and sensor data to detect objects and make real-time decisions for safe driving. Whereas GPT is specifically used for natural language processing tasks. So, again, typical use cases: the chatbots or the content creation, that sort of thing. And then, last of all, we have flexibility. So this is sort of looking at machine learning models — coming back to what we've covered already — highly specialized and tailored for those narrow tasks, whereas GPT models are just much more flexible and adaptable, and they handle multiple text-based and even image-based tasks now.
Speaker 3:So I think that's where it's interesting. There are certain similarities, but it does seem that the common thread is that machine learning is a lot more specific and scoped and, you know, using structured data, and GPT is a lot more taking that unstructured data and being able to actually create from it, not just analyze. So —
Speaker 2:Yeah. Very different use cases. Yeah.
Speaker 3:So I thought it was pretty fascinating. Kind of summarizing: machine learning is really the sort of broad category that includes the various techniques for training models to perform these specific tasks, while the GPT model is really this subset of machine learning models specifically designed for, I guess, understanding and generating natural language, with the sort of focus on outputting things that are more versatile.
Speaker 2:The great thing is, you know, as we see how models are being used in multi-agents, you don't have to choose one or the other. You can use the GPT model as your interface to the more fine-tuned models behind it, and the agents can talk to each other. And one can be specialized at finding you that exact image that you wanted and respond back to the GPT model that gives it back to you, and you can say, oh, no, that's not what I wanted. I wanted a red hat on the reindeer.
Speaker 2:And then, you know, it'll send it back to the model — you know, prompt it in a certain way — and actually be able to work with that agent a lot more intelligently than you could if you'd started with that more fine-tuned machine learning model. And, yeah, it's just really interesting to see where it's gonna go with that — all the models talking to each other. And there are gonna be more popular models coming out, very much specific to these tasks, like Sora would become, you know, the video model, and that's what everyone uses. And once it becomes like a foundational model for video generation going forward, all the other services out there are just using it as a service in a multi-agent approach to, like, you know, generate that video that you wanted and —
Speaker 3:Mhmm.
Speaker 2:Give that back to you. Another one generates the images for your presentation. Another one generates the text — all working together at their own little tasks. Agents. Yeah.
Speaker 2:Yeah. Like, o1 —
Speaker 3:— with reasoning as well, and it can go off and go, okay, this is your scientist, and this is your — mhmm — biologist or, you know, whatever. I don't know — whatever kind of function you want. Mhmm.
Speaker 3:It's, yeah, it's gonna be pretty exciting.
Speaker 1:Yeah. I think this is probably an important point too, just within what we do here at IIIMPACT: trying to identify that within organizations. You know, where can processes be automated and made more efficient, and what tools to integrate — whether that is a machine learning tool, or a more specific AI tool that's been created out there, or something where we help customize and build specific AI agents to help companies, you know, make them more efficient, and not cost so much either to build, so they see ROI pretty quickly?
Speaker 3:Mhmm. Yeah. Agreed. Yeah.
Speaker 2:The model-as-a-service approach is just becoming so amazing with that. And our specialty in understanding which are the good models to use — not just in terms of accuracy, but also cost-effectiveness — yeah, I think it's a huge help to anyone who needs that. Because when you try and look, you know — okay, I need something to do document classification for me — where do you start? There's just a million options out there. But we know what our experience has been in leveraging certain models, and what we've seen the results are from those, and the services around that. It's not like you have to go and build your own models and your own systems to actually do this.
Speaker 2:There are great services out there, but there are a lot, and it's, yeah, great to talk to people like us, because, yeah, we've just gone through all of this already. We've had some really great results using some of these, and really great results integrating them into existing systems that you're using. It doesn't have to be something standalone that's outside of your existing software stacks — you know, it's really easy to integrate these services in, which we've been doing a lot of recently too. So it's just become great. For every single problem, there's always a solution for it with AI at the moment, and it's, yeah, really fun and just exciting to start implementing those.
Speaker 1:Yeah. And, I mean, we're seeing this across almost every industry where it's being disruptive — or, rather, very helpful in a positive way while being disruptive. You know, we've had clients in kind of the legal space where they're trying to optimize documentation — hundreds of pages of documentation to analyze. Maybe it has to do with mortgages or banks, and there's legalese that they have to identify to make sure that all the things are correct within the paperwork that has to be signed. And you're talking — mhmm — hundreds of documents coming into companies, like this one client we worked with, where they're trying to have their staff analyze so much documentation that's coming in. It just becomes very daunting, or you have to hire so many people to do that, but AI should help streamline that process.
Speaker 1:But then — yeah. We've got, what, everything from energy companies to cybersecurity companies that we're helping to integrate that AI and make it more user-friendly.
Speaker 2:Absolutely. And it's often really mundane tasks too that AI is just great at doing. I mean, even humans doing that — looking through 10,000 documents a day, trying to, like, pinpoint errors within them — it's just not feasible. And even some of the software that's traditionally been out there to do that, that's been tuned to do that —
Speaker 2:You know, it can only work in a very narrow scope. That's what AI is great at. It can sort of take any problem that you give it and sort of reason itself around any issues like that. You don't have to keep tweaking it every day to get it to work, or, you know, have your staff work on really boring sort of data analysis that no one really wants to work on anyway — get them to work on more exciting things that, you know, can actually provide much more benefit to your company rather than, yeah, just data. So it's great.
Speaker 3:That is the interesting thing. I mean, you don't often think of employee satisfaction as being part of the value adds when you actually, you know, integrate AI, but it's definitely one of them. You know, making employees much more efficient and, you know, all the, as you said, mundane tasks are taken care of so they can really focus on the more satisfying elements of work.
Speaker 2:Yeah. Exactly.
Speaker 1:Cool. I think this is a good time to wrap it up for this podcast episode. Thanks again for everybody for listening in. Hit that like and subscribe button, and talk to you next time. Take care, everybody.
Speaker 3:See you, everyone.
Speaker 2:Thanks. Bye. See you.