A set of AI courses and news items from AI development agency SevenLab.dev
Bas Alderding:Hey, welcome, everyone. I'm Bas, and this is my colleague, Koen. We're both founders of AI development agency SevenLab, located in the heart of Amsterdam. And today, we're excited to talk to you about a technology that's revolutionizing business around the world.
Bas Alderding:And in particular, that's artificial intelligence, or AI. In the next few minutes, we'll explore what AI is, how it works, and most importantly, how it's transforming various businesses and industries. From automating routine tasks to providing deep insights from complex data, AI is already reshaping how we do business today, and it will do so even more in the future. We'll show you a few real-world examples that we have been working on and discuss the potential impacts, both positive and challenging, that AI is bringing to the table. So let's dive into it and see how AI is not just the future of business, but also very much part of the present.
Bas Alderding:So, yeah. I'll share my screen. Koen, do you have anything to say before we start the whole training?
Koen ter Velde:No. Not necessarily. No.
Bas Alderding:Okay. Then we'll dive right into it.
Koen ter Velde:Let's go.
Bas Alderding:Well, we'll start off with a quick outline of the things we're about to discuss. First of all, we'll do a quick recap of what we do at SevenLab and why we are the people to talk about this. As a software company, we're in the unique position that we can also show you how this technology, AI, is currently being applied in various business processes and industries. Then we'll dive into some theory with Koen. He will explain the underlying mechanisms of AI in general, and we'll look at a couple of models that are currently available on the market. Then we have some examples for you: concrete business cases of how this can be applied in your own organization.
Bas Alderding:And then we'll see what you can do as a company or business owner to start with AI. So that's it. Well, first of all, a quick introduction to SevenLab. We have been developing software for 16 years already, and the last 2 years have been mainly focused on AI, and on AI applications in particular.
Bas Alderding:So we're not really developing new models. There are currently billions being invested in developing new models. But we use those models to bring that technology to end users. So we build applications you can interact with. These can be chat applications, web applications, or just AI agents that act in the back end of a certain process.
Bas Alderding:Yeah, that's been our specialty over the last 16 years, and we're applying that to this new technology to bring it to the end users. So let's see where AI can help before we start on the theory.
Bas Alderding:We want to know in which business processes it could be applied beneficially. What we see in practice is that AI, or Gen AI, is being applied in these 5 different aspects of a business. First of all, we see a lot of opportunities in service and support, and we'll dive into how this can be applied exactly a little bit later. We also see a lot of opportunity to automate things with AI in inspection or control work. Then we have the data processing and analysis part of a business where this technology can really help. And we have knowledge management and content creation.
Bas Alderding:So when you're looking to apply AI in your organization, you should mainly look at these pillars, because the most important business cases are located in these areas of the business. And we'll show you a little bit later in the presentation how we have done this for several clients, to give you an idea and some inspiration of how to do this on your own. Okay, let's dive into a concrete example in that case before we dive into all the theory. I just talked about the knowledge management process that's relevant in a lot of organizations.
Bas Alderding:And I want to show you one quick use case here that we recently did for a customer, MoveBeyond, a big IT strategy consultancy organization. They had this problem that they had all these consultants with a lot of knowledge about IT strategy, and it was kind of hard for them to share that knowledge between groups of consultants. We had a quick discussion with them and ended up with the idea to basically build 2 AI agents. One AI agent we built specifically to gather knowledge from employees within the existing communication channels they're already using.
Bas Alderding:So they were using Microsoft Teams in their workflow to communicate between consultants and within the organization. And we built an AI agent that asks the consultants to share a little bit of what they're currently doing at a client. The AI agent has been instructed to ask as many questions as needed before it deems that it has enough information to actually save the knowledge in a common knowledge base. And on the other hand, we instructed another AI agent where users can ask questions about the knowledge that has been built up by all these consultants together within the entire organization, to get the information they need from that knowledge base. So say they need to handle a specific case at a specific client: they can ask the AI a question, and it will use all the built-up knowledge to provide an answer to that question.
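As an illustration of the second agent, the question-answering side, here is a minimal sketch of answering a question over a shared knowledge base, assuming the openai Python package and GPT-4o. The knowledge entries and helper functions are invented for this example and are not the actual MoveBeyond setup.

```python
# Minimal sketch: answer questions using a shared knowledge base (illustrative only).
import numpy as np
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Stand-in for the consultants' knowledge base; in practice this would be a vector store.
knowledge_base = [
    "Best practice: when rolling out a new ERP at a retail client, migrate master data first.",
    "Lesson learned: involve the client's IT architects before selecting integration middleware.",
    "Template: IT strategy roadmap structured as current state, target state, and gap analysis.",
]

def embed(texts):
    """Embed a list of texts with a small embedding model."""
    response = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in response.data])

def answer(question, top_k=2):
    # Retrieve the most relevant knowledge entries via cosine similarity.
    kb_vectors = embed(knowledge_base)
    q_vector = embed([question])[0]
    scores = kb_vectors @ q_vector / (
        np.linalg.norm(kb_vectors, axis=1) * np.linalg.norm(q_vector)
    )
    context = "\n".join(knowledge_base[i] for i in scores.argsort()[::-1][:top_k])

    # Let the model answer using only the retrieved knowledge.
    completion = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "Answer using only the provided knowledge base excerpts."},
            {"role": "user", "content": f"Knowledge base:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return completion.choices[0].message.content

print(answer("What should we watch out for when rolling out an ERP system?"))
```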
Bas Alderding:So when, for instance, a consultant shared a best practice about rolling out a particular system, let's say an ERP system, at one of their clients, then the AI would know about past use cases and use that context to give them an answer about the best practice right away. So, yeah, that's an example upfront. And now we'll dive into the theory with my colleague, Koen. Take it away.
Koen ter Velde:Yeah, let's go. Let's first start with a quick look at the status quo of AI right now. There's a lot of talk about AI in the news.
Koen ter Velde:Of course, there's quite a hype cycle. Some people think AI is the solver of all their problems, and other people think AI is bullshit. If you look at this overview, right now AI is in the phase of narrow AI, the lowest level of AI. AI is currently capable of handling really specific tasks in a process, but it's not able to take the knowledge of that process to another domain. If you look at all the investments done by the bigger companies, they are all aiming to achieve the general AI stage, in which an AI model is capable of taking the knowledge of a specific task, so for example the knowledge gathering example Bas gave, to another domain and applying the same approach there. Once we reach the level of general AI, the models can operate more on their own and do things in different settings at once. And then the third level is super AI, which is a far stretch goal in the far future, where AI will be able to, for example, run an entire company based on all the data and systems linked to it. So that's something for the far future. But right now, we're still in the narrow AI phase.
Koen ter Velde:But we are approaching the general AI phase.
Bas Alderding:Maybe a quick question about that, Koen. Of course, people might know ChatGPT, and we'll get into that a little bit later in the presentation. But isn't that already applying part of its domain knowledge to other industries than the ones it was built for?
Bas Alderding:Like, isn't it already transferring from one domain to another in a certain way?
Koen ter Velde:No, I don't think so. If you talk to ChatGPT, you're talking to a large dataset of knowledge. But ChatGPT isn't capable yet of performing a specific task you instructed it to do and applying it on its own to a different domain. So I think it's still really focused.
Koen ter Velde:If you look at ChatGPT in particular, it's really focused on answering the questions you give it. But if you use the model behind ChatGPT, GPT-4o, for example, it's not capable of making its own plan to achieve a certain goal, not at the level a general AI stage would be.
Bas Alderding:So you mean a general AI could do everything?
Koen ter Velde:No, not everything. That's more the super AI. It's a combination of creativity and intelligence. At a certain point, these models get smarter than the smartest people on domain topics.
Koen ter Velde:But there's also creativity needed to come up with new solutions for problems. I think the combination of the two makes general AI or super AI. Right now, the models are getting close to the smartest humans on domain-specific topics. But when you have a model that's smarter on all domains of knowledge and has the creativity, then it can come up with new solutions which we as humans aren't able to come up with.
Bas Alderding:Yeah. Okay. Thanks.
Koen ter Velde:Yeah. When you look at AI in general, a lot of people talk about AI, but they don't really know what AI is. AI is a field of study which has been running since the 1950s. It started with making machines compete in certain simple games, for example. Since the 1980s, machine learning came into play; machine learning is a set of models in artificial intelligence which can learn from data. We'll tell a bit more about that later. And since around 2010, deep learning took off, and deep learning is a subset of machine learning.
Koen ter Velde:It uses multiple layers of processing to learn from data. So if we go to the next slide, to talk a bit more about machine learning: like I said, machine learning is about machines that can learn from data. We've been building software for 16 years now, and in the past, when we needed to process certain information, for example analyze information, we would write a lot of rules to process that information and check for certain specific details in it. Nowadays, with machine learning, you can train a machine learning model on a historical dataset and use it to predict future values.
Koen ter Velde:So if we go to the next slide for some examples: you could, for example, train a machine learning model on a lot of historical data about sun, wind, and also the price of electricity, to predict future values. You train this model on the combination of sun, wind, and electricity price, so it's learning the correlation between those variables. And that's historical data, right? Yeah, historical data.
Koen ter Velde:And when this model is trained, you can then put in, for example, forecasts or actuals, and use it to predict the electricity price for that point in time. The same techniques can also be used for pictures. For example, X-ray imagery can be used to predict certain diseases, you can look at gene data, and also the spam labeling in your email inbox is based on machine learning.
Koen ter Velde:So there's a machine learning model that's trained on a lot of emails which are labeled spam or not spam, and using this machine learning model, we can nowadays mark emails as spam or not automatically. The nice thing is that this machine learning model can be retrained constantly to make it better and better. Let's go to the next slide.
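As a small illustration of this kind of classic machine learning, the sketch below trains a regression model on historical sun, wind, and price data and then predicts a price for new conditions. It uses scikit-learn, and all the numbers are made up purely as placeholders.

```python
# Illustrative only: predict electricity price from sun and wind, using made-up historical data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder historical data: solar irradiance, wind speed -> electricity price.
sun = rng.uniform(0, 1000, 500)           # W/m2
wind = rng.uniform(0, 25, 500)            # m/s
price = 80 - 0.03 * sun - 1.5 * wind + rng.normal(0, 5, 500)  # EUR/MWh, synthetic relation

X = np.column_stack([sun, wind])
X_train, X_test, y_train, y_test = train_test_split(X, price, random_state=0)

# Train on historical combinations of sun, wind and price.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("R^2 on held-out data:", round(model.score(X_test, y_test), 2))

# Feed in tomorrow's forecast to get a predicted price.
tomorrow = np.array([[650.0, 12.0]])      # forecast sun, wind
print("Predicted price (EUR/MWh):", round(float(model.predict(tomorrow)[0]), 2))
```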
Koen ter Velde:Because if you look at deep learning: deep learning is a newer form of AI and a subset of machine learning. With deep learning, like I said before, you use multiple layers of neurons, a neural network, to process information. If you train a machine learning model on historical data, you need to tell the model what to look for: please look for the combination of wind and sun in relation to the price of electricity, and then the model will learn.
Koen ter Velde:With deep learning, there are a lot of layers in the learning process, and each neuron in these layers looks for a certain specific feature of the dataset. It might trigger, for example, on the color red, and it will output a value from 0 to 1, where 1 is really red and 0 is not red. All these neurons have their own specific feature to look for. And when you train this model, it starts to recognize patterns in the data without instructions upfront.
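To make the idea of layers of neurons concrete, here is a minimal sketch of a small neural network in PyTorch, trained on synthetic data; it is only an illustration of how stacked layers learn a pattern from data without hand-written rules.

```python
# Illustrative only: a small neural network that learns a pattern from data without hand-written rules.
import torch
from torch import nn

torch.manual_seed(0)

# Synthetic data: two inputs (e.g. sun and wind), one target value (e.g. price).
X = torch.rand(512, 2)
y = (80 - 30 * X[:, :1] - 40 * X[:, 1:2]) + 0.5 * torch.randn(512, 1)

# Two hidden layers of "neurons"; each neuron outputs a value that feeds the next layer.
model = nn.Sequential(
    nn.Linear(2, 16), nn.ReLU(),
    nn.Linear(16, 16), nn.ReLU(),
    nn.Linear(16, 1),
)

optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

for epoch in range(300):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)   # how far off the current weights are
    loss.backward()               # compute how to adjust every weight
    optimizer.step()              # adjust the weights a little

print("final training loss:", round(loss.item(), 3))
print("prediction for sun=0.5, wind=0.5:", model(torch.tensor([[0.5, 0.5]])).item())
```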
Koen ter Velde:So if you go one slide further and compare the two technologies: machine learning, which is older, is easier to train. You don't need that much data, but there are accuracy plateaus when training these models. Also, a funny thing to note is that these machine learning models are typically trained on CPUs, the processor of your computer, while deep learning models are trained on GPUs, the graphics processor. That's why the stock of NVIDIA, which is a GPU manufacturer, is skyrocketing right now: all the big tech companies need to buy these GPUs to train their models. With deep learning, accuracy keeps improving as you add more data, but you need a lot of data to train all the layers in the model.
Koen ter Velde:But it's able to find relationships in the data on its own, without instructions upfront. Then if we look at the models, there are different modalities used in models. You can put in pictures, text, audio, or video, and the output can also be pictures, text, audio, video, etcetera. You have unimodal models that are trained on one specific modality, for example only pictures.
Koen ter Velde:So you can put a picture in and a picture comes out. And you have multimodal models. ChatGPT's GPT-4o, for example, is a good example of a multimodal model, where you can put in a picture and ask what's in the picture. Or you can put in text and ask: hey, generate a picture of a horse. And that's also a distinction in the models being developed right now.
Koen ter Velde:Some models are really focusing on one specific modality and a specific application, so they become really good at, for example, generating pictures for medical use. And other models are really focusing on the multimodal approach; those are the bigger models that are out there right now. GPT-4o, again, as an example, takes a good multimodal approach and is one of the biggest models out there right now.
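As a quick illustration of what multimodal means in practice, this is a minimal sketch of sending an image plus a text question to GPT-4o through the OpenAI API; the image URL is a placeholder.

```python
# Illustrative only: ask a multimodal model a question about an image.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment
image_url = "https://example.com/shipment-photo.jpg"  # placeholder URL

completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is shown in this picture?"},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }
    ],
)
print(completion.choices[0].message.content)
```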
Bas Alderding:Yeah, maybe it's also good to note: when you're watching this presentation, you should take away a generalized view of the AI space, because we're mentioning models here that will probably be outdated next week. For instance, we already had a lot of news this week, like Meta releasing their new Llama models, which now also have vision capabilities.
Bas Alderding:We have Gemini from Google. There are a lot of these models out there, and they're constantly evolving. So I think that's important to note here as well. And in a general sense, most of these companies are trying to build multimodal models. I think that's because then it can answer almost any question.
Bas Alderding:Right? You can put anything into it, and it will be able to answer you about that object, that text, that audio, that video. And the more types of data it can handle, the more it can be placed inside your process, because you should think of it like a human being. Right? We also have multiple capabilities.
Bas Alderding:We can not only understand audio; we can also understand what we see with our eyes. And, of course, they're trying to replicate this in an AI. So, a little bit of context there.
Koen ter Velde:Yeah. And I think it's also the combination of: if you implement AI, and AI can operate on its own, you want to make that implementation as easy as possible, and that means it should act like a human. That's why Figure 01, for example, which is a humanoid robot, is also really scaling up its investments and development right now. Because if you have a robot with AI brains which can operate in normal life, it can take the bus, it can handle a machine, for example, it's far easier to implement than when you need to develop new machines because the existing ones can't be operated by AI on their own.
Bas Alderding:Yeah. And when they did this in the past, they were training machine learning models that were only capable of executing certain tasks, you know. Either recognizing an object or a set of objects, and then a separate model to calculate or predict which movements to make, for instance.
Bas Alderding:And if you have everything in one model, yeah, you already have a great start to build something in the physical world.
Koen ter Velde:Yeah. Maybe we should go on with the AI labs. Like you said, Bas, there's a lot of development happening in the deep learning scene of AI. Of course, you have Meta, the company behind Facebook, Google, Anthropic, Microsoft, OpenAI.
Koen ter Velde:They're all big labs that develop models. You also have Mistral, a French company that's also building its own models. And there's also China, of course. You have Baidu, and Baidu is one of the bigger companies there.
Koen ter Velde:And to give a bit of perspective: at OpenAI right now there are, I think, 1,200 to 1,300 people working. At Baidu's AI division, 12,000 people are working right now. So the scale of the focus on AI in China is not comparable to the Western world. And if you look at ways to improve the results of your AI application, it all starts with data. The saying "garbage in, garbage out" also applies here.
Koen ter Velde:If the data you hand over to your AI model isn't correct or is poorly structured, then the output will also be worse than you want. The same goes for the technologies you use to set this up. A lot of people we talk to say, for example: hey, I'm using ChatGPT right now to generate text or to answer questions. That is a good starting point to learn how this technology works and to understand what's going well and what needs improvement. But if you set up the right combination of technology, in the sense that you have the right data links and you combine this with the right model with the right settings, you can really improve the results you get from your application.
Koen ter Velde:And the same goes for the prompt. The prompt is some sort of persona for the AI model you're using. You tell the AI model: you are a sales representative, you work for this company, your goal is to reach X, Y, Z.
Koen ter Velde:You need to comply with these rules. So the prompt really determines how creative the model can be and also which rules it needs to follow; it's also a sort of guardrail on the output process. And if you look at all the projects we do, the first three, so data, technology, and prompting, are the main focus we have in each project.
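In practice this kind of prompt is usually passed as a system message. A minimal sketch, with an invented persona and invented rules, might look like this, assuming the openai Python package:

```python
# Illustrative only: the prompt acts as a persona plus guardrails for the model.
from openai import OpenAI

client = OpenAI()

system_prompt = """You are a sales representative for Acme SaaS (fictional example company).
Your goal is to qualify the lead and propose a follow-up call.
Rules:
- Only discuss Acme SaaS products; never invent prices.
- Keep answers under 120 words.
- If you do not know something, say so and offer to check with a colleague."""

completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "We have 40 employees. Is your tool a fit, and what does it cost?"},
    ],
    temperature=0.3,  # lower temperature = less creative, more rule-abiding output
)
print(completion.choices[0].message.content)
```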
Bas Alderding:And maybe, with regard to the data part: unlike with traditional AI, we are talking here specifically about Gen AI applications and improving results in Gen AI.
Koen ter Velde:Yeah.
Bas Alderding:And where, historically, for machine learning models you needed really well structured data, that's now less relevant. For instance, you can use a dump of text to train a model, whereas in the past you basically needed an Excel sheet or a database with structured information. So in that regard, yes, you can train on unstructured data. But still, like Koen said, if there are biases or errors in that data, the model will try to find correlations within the dataset you're using to train it, and then also output the wrong answers.
Bas Alderding:And the different providers that we saw on the previous slide are also taking different approaches to this data part. I think most of the Gen AI companies building their models try to scrape as much data from the Internet as possible, train their models on that, and let the neural networks, the deep learning networks, figure out the patterns. But you also have different models coming from the big tech players like Microsoft. You have, for instance, the Phi-3 models, or the Phi models in general, that are mainly focused on super high-quality data. It can be unstructured data, but it's thoroughly checked.
Bas Alderding:So they train models on super high-accuracy data, and they're getting some very good results there with less data, but higher-quality data, and then building small models from that. So that's the route they're taking. I think some of the providers we saw on the previous slide are still looking for the right balance between bad data and good data. So that's just a small nuance there.
Koen ter Velde:Yeah. To give another example: Amazon is training its models also on reviews of products on their website, which don't always contain the quality of data you want. But if you look at the projects we do, the first three steps, so data, technology, and prompting, are the main approach to improve results.
Koen ter Velde:And when you start working with an AI application, you also start gathering more data. This data can be used to fine-tune a model. With fine-tuning, you take the model you want to use and you give it a lot of examples of questions and answers. By doing so, the cost of the model goes down, so it's cheaper to use, it also gets faster, and the quality gets better. So you're fine-tuning this model for one really specific application, for example answering support questions for your SaaS company.
Koen ter Velde:But to do so, you need to have a dataset of at least hundreds or thousands of examples of good questions and answers. And when you want to take it even a step further, you can look at training your own model. Then you take a foundation model, some sort of base model, and train it further, but you need a lot of data and also a lot of hardware to facilitate that training process.
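As a sketch of what such a fine-tuning dataset can look like, here is the chat-style JSONL format that, for example, OpenAI's fine-tuning endpoint accepts. The support questions and answers are invented placeholders; in a real project you would collect hundreds or thousands of reviewed pairs.

```python
# Illustrative only: build a small fine-tuning dataset of question/answer examples.
import json

examples = [
    ("How do I reset my password?",
     "Go to Settings > Security > Reset password. A reset link is emailed within a minute."),
    ("Can I export my invoices?",
     "Yes. Open Billing > Invoices and click 'Export CSV' in the top-right corner."),
    # ...in practice: hundreds to thousands of real, reviewed support Q&A pairs.
]

with open("support_finetune.jsonl", "w", encoding="utf-8") as f:
    for question, answer in examples:
        record = {
            "messages": [
                {"role": "system", "content": "You are the support assistant for our SaaS product."},
                {"role": "user", "content": question},
                {"role": "assistant", "content": answer},
            ]
        }
        f.write(json.dumps(record) + "\n")

# The JSONL file is then uploaded and a fine-tuning job is started, e.g. with the OpenAI SDK:
#   client.files.create(file=open("support_finetune.jsonl", "rb"), purpose="fine-tune")
#   client.fine_tuning.jobs.create(training_file=file_id, model="gpt-4o-mini-2024-07-18")
```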
Koen ter Velde:Then a comparison between using a model available in the cloud and an open-source model. This picture is a bit outdated, but still: all the projects we start, we start with a cloud model, because these cloud models are the top models available right now. And by using these more advanced models, we can also gather more high-quality data for fine-tuning at a later stage. Compared to a cloud model, you could also decide to use an open-source model and host it yourself. There are a lot of different open-source models, in terms of how good they are at specific tasks but also in terms of their size. Like Bas said, for example, Meta just launched a new version of Llama, their model.
Koen ter Velde:And this model comes in different sizes. You can decide to host the biggest one, but it's also the most expensive to host, or you can go for a cheaper version if you have a simple application for this model. So in each project we tend to use the cloud version first, and then in the future we can use the data gathered with this cloud version to fine-tune an open-source model and host it in your own infrastructure.
Bas Alderding:Yeah. I think now it's time to dive into a couple of examples of how we've applied this AI technology that we learned about in the previous slides, and how it's being applied in practice. Since the beginning of this year, 2024, we've done about 30 or maybe even more Gen AI projects where we implemented this technology into production processes. And with production, I mean live processes that are currently running at companies, where AI is assisting people with different kinds of tasks. So when we talk about AI, we can actually talk about 4 different kinds of AI.
Bas Alderding:And in blue, we have the kind of machine learning AI that is specialized to execute one task, and we'll have an example of that later. Then, when you look at generative AI, we usually say we are building applications within these three pillars. The first is interacting with your content, which means: I have some data, or some text, some documents, or a knowledge base that I want to ask questions about and get direct answers from. For instance, this can come to life as a chatbot, where the chatbot is connected to different tools.
Bas Alderding:These can be existing systems or, like I said, a knowledge base. Once you ask a question, it's able to look into those sources and come up with an answer that's relevant to your query. The second pillar is where we let an AI model look at some of that data from a certain perspective. For instance, we can have an AI model act like an accountant, a legal representative, or a business analyst, give it a certain use case, and then ask it to come back with a logical answer from that perspective. This also applies to many use cases.
Bas Alderding:We'll give examples later. Then the last one is content creation. This is one of the things that people usually think about when using generative AI: you ask a question to ChatGPT, for instance, and it comes back with some content. This is mainly used to generate a letter, maybe courses or course structures, scripts, or to write documents for you.
Bas Alderding:And this gets even more interesting when you also connect it to your own data. So based on the data or knowledge you've built within your organization, you're basically producing new content. Now, in the first category, machine learning, we have an interesting use case that we implemented for Stichting OPEN, which is responsible for e-waste management in the Netherlands and a couple of other European countries. They had the problem that they need to handle all these TVs that may or may not contain mercury. In the past, it was really difficult for them to determine whether a TV even had this material inside, and they had a 15-step document that their employees were using when separating the e-waste, which they needed to go through before they could even determine whether a TV had mercury in it or not.
Bas Alderding:What we built for them in the end is an algorithm that was trained on a lot of photos of TVs containing mercury. The end result was an app that could give a check mark or a big red cross depending on whether we thought the TV would potentially contain mercury or not. So they went from a 15-step document and checklist for every TV to an instant scan and result on whether a TV contains mercury. A big improvement, and also quite some time saved within the actual process of separating e-waste. Okay.
Bas Alderding:Maybe one question. Yeah.
Koen ter Velde:Sorry, one question about this case. Since we used machine learning here, why didn't we use a deep learning model, which is capable of processing images?
Bas Alderding:Yeah, that's a good question, Koen. Nice that you ask. We are actually using a machine learning model here because one of the requirements was to have the model run on the device itself, so on a mobile device with very limited hardware capabilities.
Bas Alderding:And when we were building this application, we didn't have the nice Llama models, for instance, that were released just today as we're recording this presentation, where we'd have a multimodal generative AI tool we could pass images to and ask for a conclusion. Also, the small Gen AI models are quite limited in accuracy, and they're built for general applications, not for really specific tasks. We had a really nice, specific task for the model here, we had the requirement to run the model offline on a mobile device, and we needed to be able to keep training the model to make it better and better with more and more photos for this specific task.
Bas Alderding:And, in that sense, we chose to work with a machine learning algorithm.
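As a rough illustration of this kind of lightweight, on-device image classification, the sketch below extracts simple HOG features and trains a linear classifier with scikit-learn on placeholder data. It is only one possible approach, not the actual pipeline built for this customer.

```python
# Illustrative only: a lightweight image classifier that can run on modest hardware.
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder "photos": in a real project these would be labelled TV photos
# (1 = contains mercury backlight, 0 = does not).
images = rng.random((200, 128, 128))          # grayscale 128x128 stand-ins
labels = rng.integers(0, 2, 200)

def features(img):
    """Hand-crafted HOG features keep the model small enough for a mobile device."""
    return hog(img, pixels_per_cell=(16, 16), cells_per_block=(2, 2))

X = np.array([features(img) for img in images])
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

classifier = LinearSVC()
classifier.fit(X_train, y_train)
print("held-out accuracy:", round(classifier.score(X_test, y_test), 2))

# In the app: take a photo, compute its features, and show a check mark or red cross.
new_photo = rng.random((128, 128))
print("contains mercury?", bool(classifier.predict([features(new_photo)])[0]))
```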
Koen ter Velde:Is it then safe to say that, in general, machine learning models are on average smaller than a deep learning model? Smaller in the sense of the hardware needed to run them.
Bas Alderding:Yeah, they take less compute. But like I said, there are recent developments in the Gen AI space, like the new Llama 3B models. They are made to run on the edge.
Bas Alderding:So on mobile devices as well. The better these small models become, the more they may eventually make such machine learning applications obsolete. If they can do calculations really well, do object detection, and answer all those specific questions very well, then maybe, for some use cases, Gen AI will take over.
Koen ter Velde:Yeah. Interesting.
Bas Alderding:Yeah. This is another interesting use case, about creating new content. We had a customer called MedicInfo, and they were creating self-help plans, or treatment plans. These treatment plans were given to doctors to help their patients. So the doctor comes up with the treatment, and then he needs a document for the patients to read, to make sure they can do their self-care once they're no longer in direct contact with the doctor.
Bas Alderding:And we helped them automate that whole process with generative AI. With a minimum amount of context, we can now create a full-fledged self-care advice or plan for a specific client or for specific protocols. This is, of course, saving a lot of time in the actual process, because this whole company exists to make these kinds of plans. Then we have an AI support solution we built for a big company, Hoofa. In this project, we developed a solution connecting bread-baking machines, or rather the machine data, with knowledge about the machines and how to solve certain problems.
Bas Alderding:These were basically manuals for specific machines, and we combined those two sources to provide factory workers with a direct plan of action when a machine was no longer working. So if the machine gave errors or just stopped one of its processes, they would be able to ask the AI agent for a solution instead of a technician. We saved a lot of time there, we improved the accuracy of the answers given, and it could also answer in the user's own language. So the level and difficulty of the instructions could be customized to the end user, but also literally given in their own language, be it German, Dutch, or English.
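A stripped-down sketch of how such a support agent can combine machine data with a manual excerpt and answer in the operator's own language. The machine name, error code, and manual text are invented for illustration, and this is not the actual integration.

```python
# Illustrative only: turn a machine error plus manual knowledge into a plan of action.
from openai import OpenAI

client = OpenAI()

machine_data = {"machine": "dough divider 3", "error_code": "E42", "status": "stopped"}  # invented example
manual_excerpt = (
    "E42: dough feed blocked. Check the hopper sensor, remove residue from the feed belt, "
    "then restart the dosing unit."
)  # in practice retrieved from the machine's manual based on the error code

completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": (
                "You are a support assistant for bakery machines. Combine the machine data and the "
                "manual excerpt into a short, numbered plan of action for a factory operator. "
                "Use simple wording and answer in the requested language."
            ),
        },
        {
            "role": "user",
            "content": f"Machine data: {machine_data}\nManual excerpt: {manual_excerpt}\nLanguage: Dutch",
        },
    ],
)
print(completion.choices[0].message.content)
```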
Bas Alderding:And then we have a last one, an interesting one to compare with the previous solution. There we had an AI model specifically trained to recognize mercury in TVs, but here we have taken a different approach with an AI vision model at a large logistics company called Loendersloot. They have a lot of incoming shipments, and about 15 inspectors checking all kinds of shipments.
Bas Alderding:In particular, they need to check beverages, either for customs or for their end customers, and they need to enter about 150 attributes for every incoming shipment. What we did is use a multimodal Gen AI model, where we just feed it an image of an incoming shipment, or a couple of images, and then ask it to fill in all those 150 attributes from the role of a logistics expert. Then we pass all that information directly into their ERP system. This resulted in a mobile app where we actually take those pictures, analyze them with Gen AI, and then pass the results on to the ERP system.
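A simplified sketch of this kind of structured extraction: a photo goes in, and the model is asked to return a small JSON object that could be posted to an ERP system. The attribute names below are invented, and the real solution captured around 150 of them.

```python
# Illustrative only: extract structured shipment attributes from a photo for the ERP system.
import json
from openai import OpenAI

client = OpenAI()
photo_url = "https://example.com/incoming-shipment.jpg"  # placeholder

completion = client.chat.completions.create(
    model="gpt-4o",
    response_format={"type": "json_object"},  # ask for machine-readable JSON output
    messages=[
        {
            "role": "system",
            "content": (
                "You are a logistics inspection expert. From the photo of an incoming beverage "
                "shipment, return a JSON object with the keys: product_type, brand, bottle_size_ml, "
                "bottles_per_case, case_count, pallet_condition. Use null when a value is not visible."
            ),
        },
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Fill in the shipment attributes for this delivery."},
                {"type": "image_url", "image_url": {"url": photo_url}},
            ],
        },
    ],
)

attributes = json.loads(completion.choices[0].message.content)
print(attributes)
# Next step in the real flow: POST these attributes to the ERP system's API.
```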
Bas Alderding:Okay. So now we've discussed a couple of use cases. Like I mentioned earlier, we've done about 30 to 35 of these generative AI use cases and implementations this year, so we've only shown 4 here, but there's a lot more to talk about. But before we go into how you could use this technology within your own business, we would also like to tell you a little bit about the risks involved and what you should look out for when using this technology yourself.
Bas Alderding:So now it's over to Koen again to tell us a little bit about those risks. Sorry, Koen, for letting you only talk about the theory and the risks. Not the most exciting things.
Koen ter Velde:Not the sexy part of the video, yeah.
Koen ter Velde:I think one of the biggest risks of applying AI is biased models. These models are trained on really, really large datasets: datasets of texts and information found on the internet. This data is fed into the model to train it. These models are then tested by real-life users who fine-tune the training process, and the humans used in this process are, of course, also biased.
Koen ter Velde:So the danger is that you use a model to, for example, categorize people in a certain process, and this model has a bias in it which affects how people are categorized. There are certain steps you need to implement when building an AI application to check for this bias and to keep reviewing it while you're using the application, to prevent this from happening. Another challenge of using AI is transparency. A lot of AI implementations feel a bit like a black box.
Koen ter Velde:You send in data and an answer comes out, but you don't know what this answer is based on; you don't know the reasoning of the model. So also on the transparency side of AI implementations, you really need to look at how you can lift the hood of the AI application to see what's happening, to see where the reasoning comes from, and to check whether the generated answers are correct or not. And that also links a bit to the third risk of AI, which is privacy. When you send data to a model, is this data stored? Is it used to train the model?
Koen ter Velde:When you talk about personal information: you don't want to send personal information to an AI model that is then stored on their servers to retrain the model. So in the projects we start, we always look at the bias of the models we're using and how to test for and prevent it. We look at ways to create more transparency, so we can explain where an answer comes from. And we zoom in on the privacy aspect of the application, to prevent wrong or unnecessary data being sent to the model, where it may or may not be stored.
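One simple example of such a bias check is comparing the rate of positive outcomes across groups in the model's decisions. The sketch below does this on made-up data and is only one of many checks you could run.

```python
# Illustrative only: compare a model's positive-outcome rate across groups on made-up data.
from collections import defaultdict

# Each record: the group a person belongs to and the model's decision (1 = selected / approved).
decisions = [
    {"group": "A", "selected": 1}, {"group": "A", "selected": 1}, {"group": "A", "selected": 0},
    {"group": "A", "selected": 1}, {"group": "B", "selected": 0}, {"group": "B", "selected": 1},
    {"group": "B", "selected": 0}, {"group": "B", "selected": 0},
]

totals, positives = defaultdict(int), defaultdict(int)
for record in decisions:
    totals[record["group"]] += 1
    positives[record["group"]] += record["selected"]

rates = {group: positives[group] / totals[group] for group in totals}
print("selection rate per group:", rates)

# A common rule of thumb: flag the model for review if one group's rate
# falls below roughly 80% of the highest group's rate.
worst, best = min(rates.values()), max(rates.values())
if best > 0 and worst / best < 0.8:
    print("possible bias: review the training data and decisions")
```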
Bas Alderding:Yeah. Maybe the perfect example of how this can result in something controversial is what Google did a while ago, where, for instance, you would ask it to generate an image of German soldiers in the Second World War, and it would generate only images of black people with Nazi symbols on the uniform, because it was trained in a certain way, different from what you would expect. There was a bit of culture baked into the model to represent as diverse a set of people as possible within certain contexts, but it had too much bias, or it leaned too much towards those aspects when generating these images.
Bas Alderding:It might be fun to look those images up; they're totally absurd, of course. But it works both ways. The model is pre-trained with a certain vision of how culture should be, and that also determines the outcomes you sometimes get from these AI models. For instance, the models behind ChatGPT are trained in a different way than the models from Elon Musk, which we didn't even talk about in this presentation. Musk is building his Grok 3 model at this moment, together with all the data he got from X.
Bas Alderding:And we all know that on X, in some cases, different viewpoints are shared than how the people at OpenAI, for example, think about certain aspects of culture.
Koen ter Velde:Maybe another good example: Amazon trained a model on a lot of people's resumes to use it to assess new resumes sent in, to analyze the resumes before doing job interviews. But this model was trained on resumes that came mainly from male coworkers. So apparently the model learned that male candidates were better or preferable compared to female candidates. By inputting a certain set of data, this model became biased towards male profiles.
Bas Alderding:Yeah, interesting. So we've talked a lot about theory and a lot about the use cases. What if you want to implement AI within your own processes and organization? This is usually a step we do together with our customers, where we do a big brainstorm with a group of people about how this can be applied and what the low-hanging fruit is when using Gen AI technology. But you can also do it yourself.
Bas Alderding:We would recommend the following steps. First, assume the role of business analyst, or process analyst, within your own organization, and try to find something within the processes mentioned here that takes a lot of time to execute.
Bas Alderding:So a part of that process. For instance, in the service and support area, you could think of use cases where we have to answer questions for people making support requests, either by mail or via a support ticketing system. It might also not be with customers but with internal employees, like an HR department having to provide a lot of support for common questions that employees are asking. So when you're talking about service and support, those are a couple of examples you could look at. Then there's data gathering, in-field data gathering, sometimes called inspections.
Bas Alderding:So I have people working in the field gathering all kinds of data; maybe that's something for AI to assist with. Either analyzing visual information, like we showed with the Loendersloot use case, which could also be applicable for a construction company, for instance, that needs to update the customer on the status of what it's constructing. We could show an image to an AI and have it assess it like an architect or a construction manager. Or we could have an AI look at some data, either from a machine, like we discussed with Hoofa, or data we are receiving from an external party, things like that.
Bas Alderding:Then the third one is processes that have to do with data analysis or data processing. Sometimes you have people working in jobs where a lot of unstructured information comes in, like emails and documents that are always in a different format. But we can instruct an AI to extract certain information from those sources and turn it into structured information. So if we tell an AI: you behave like an arbiter of this information, and make sure you get all these points out of it, then we can, of course, store that information in an existing system we already have, like we also explained with Loendersloot, where the information is eventually saved in an ERP system.
Bas Alderding:The other way around, we could also do a bit of data analysis with AI, where we connect the AI to a database, be it your CRM system, ERP system, or ticketing system, and ask concrete questions about, say, the volume of sales we made compared to last week, where we traditionally needed a data analyst. We also have use cases in knowledge management, where we can ask: is all the knowledge we're building still locked inside people's heads, and is it shared enough between people? Most companies don't have a company wiki that's actively being maintained. But maybe there's an opportunity with AI to gather knowledge from all these employees and make it accessible to anyone, to be used in a similar case or maybe even in a new case, to make that process more efficient and not reinvent the wheel every time with every customer.
Bas Alderding:And then lastly, we could look at content creation, where we need to use our company context in a certain fixed process, like sending newsletters every week that need to be personalized, or sending documents that need to be personalized. Can we do something in that area where we see potential time savings if we built a little generative AI machine that could do this process quicker, with fewer inputs or even no inputs, so doing it autonomously? So I would say it's important to identify use cases within those areas first. Of course, there are a lot of other opportunities you can look at with this technology. But once you identify those use cases, it's important to score them in terms of impact.
Bas Alderding:So how much do they impact the business if we do this 5 or 10 times quicker, or maybe even quicker than that? Also assess how much effort it would be for the organization, or an external party, to build that machine for you. Of course, we'll first choose the use cases that have the biggest impact and require the least amount of effort when determining which use cases to go for. And once you've identified that use case, don't make a huge project out of it. First build a small proof-of-concept tool, or have it built by someone, that shows all the stakeholders involved in that process what generative AI can do, to inspire them to think along about how it can be further improved, but also to show them the capabilities so they can come up with more ideas on their own.
Bas Alderding:Or maybe to identify data gaps, the data the AI still needs to actually execute that task with higher accuracy. So these would be the steps we recommend to build your own AI solution. Koen, do you have anything to add to this list, or maybe a best practice to share?
Koen ter Velde:Maybe a bit on assessing the effort side of an AI implementation, as it can be hard, if AI is not your main topic, to assess how hard it is to develop an AI application. I think you can compare it to asking another colleague to perform a certain task. If I ask someone: hey, can you take on this task, and this task is quite specific and doesn't have a lot of variables in play that can affect the outcome, then it's quite easy to automate. And if you look at a task or process where a lot of variables affect the task and its outcome, it becomes harder to develop such an application.
Koen ter Velde:So to assess the effort, you could say: the tasks that are easy for a human are also easy to implement in terms of technology, and the harder it is for a human to perform, the harder it gets for AI as well. Certainly tasks that are, for example, the holy grail of your company, where if we can solve this problem, we are x amount better than all our competitors, those tasks take more effort to develop.
Bas Alderding:Yeah, a good note to end on, I think. If people are interested in learning more about this topic, so AI and applying it in your business, we'll be producing more of these videos. We plan on doing it weekly or biweekly.
Bas Alderding:I would recommend you follow our channels. So subscribe and like our content; that will motivate us to do more of this. Usually we were kind of working in the background on our projects, but in the future we'll be sharing concrete examples to show you how certain AI agents are built. We'll keep you up to date on the AI news so you never miss another opportunity in the AI landscape.
Bas Alderding:And this is one of our first videos, where we take you through the basics, but we'll produce a lot more of the content I mentioned. So please follow us. Right, Koen?
Koen ter Velde:Yes. Yes. Follow and subscribe.
Bas Alderding:Yeah. Okay. Until next time.
Koen ter Velde:Yes. See you later.