"The Edge AIsle" brings you to the forefront of artificial intelligence and edge computing, powered by Hailo.ai. In this podcast, we explore how edge AI is reshaping industries from smart cities and intelligent video analytics to autonomous vehicles and retail innovation. Join industry experts and thought leaders as they share insights into how Hailo’s AI processors are transforming the way industries function, enabling real-time deep learning and AI-driven applications directly on edge devices.
This isn’t just a podcast about technology—it's about how AI is empowering industries and improving lives across the globe. Whether it’s smart cameras making our cities safer or AI accelerators driving innovation in autonomous vehicles, the possibilities are endless.
If you're passionate about the nuts and bolts of AI processors and how they integrate with edge platforms to deliver unparalleled performance, "The Edge AIsle" is the podcast for you. Expect detailed analysis and a peek behind the curtain at the future of edge computing AI.
00:00:00:04 - 00:00:12:22
Unknown
Welcome to another Hailo AI podcast, bringing you innovations and insights into AI on the edge. Now let's get started.
00:00:12:22 - 00:00:41:14
Unknown
Good morning, good afternoon, or good evening, depending on where you are in the world. Welcome to EE World's Engineering Training Days. Engineering Training Days is a series of webinars where we discuss all things related to electronics. Today's session is the rise and future trends of edge computing and AI, sponsored by Advantech and with help from Hailo.
00:00:42:09 - 00:01:05:18
Unknown
Our speakers today are Charlie Wu, associate director of product management at Advantech, and DC Smalley, general manager of North America at Hailo. So, DC, you had a little bit to say first, so why don't you go? All right, sounds good. Thank you very much. So, yeah, I'm DC Smalley, the GM for North America at Hailo.
00:01:05:18 - 00:01:25:05
Unknown
So I'm handling our sales, application engineering, and go-to-market in North America. Happy to be here. So yeah, we're going to talk a little bit about edge AI, starting out with the types of AI that, at least in the Hailo nomenclature, we identify in the market today.
00:01:25:06 - 00:01:46:15
Unknown
So starting with this slide here, this is sort of the evolution of the different types or categories of AI. And I'm not talking about artificial general intelligence; we'll leave that for sci-fi, hopefully, or a few years down the road. These are real, practical types of AI that we see in the market today and that we're supporting with products and technology.
00:01:46:15 - 00:02:12:21
Unknown
So starting at the left, perceptive AI. This is kind of the most common type of AI that we see in practice in the industry today. This is looking at a scene, looking at a video typically, and getting an understanding of what's in the scene, who's in the scene, where they are going, what's happening, so that you can make determinations about what you should do.
00:02:12:21 - 00:02:34:03
Unknown
So for example, if you're in an automated car, an autonomous driving car, you are perceiving the environment around you, understanding the trajectories of the objects around you, and making determinations from there. Like I said, this is the most common type of AI that we're seeing in practice today.
00:02:34:05 - 00:02:59:01
Unknown
It's generally sort of video driven, so it's making these determinations from video data. And it's really about taking a lot of data and distilling that down into a smaller amount of data. So for example, if you're looking at a scene, a video scene, you're taking in a lot of video data and distilling that down to something that is an understanding of what's going on in that scene.
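To make that "lots of pixels in, a small understanding out" idea concrete, here is a minimal Python sketch. OpenCV's classical HOG person detector stands in for the deep detection model that would actually run on an edge accelerator; the camera index and the metadata layout are illustrative assumptions, not anything from the webinar.

```python
# Perceptive AI as data reduction: frames go in, a tiny structured summary
# comes out. A classical HOG person detector stands in for a deep model.
import json
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture(0)          # hypothetical camera index
frame_id = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    # Megabytes of pixels are distilled into a few hundred bytes of metadata.
    summary = {
        "frame": frame_id,
        "people": [{"x": int(x), "y": int(y), "w": int(w), "h": int(h)}
                   for (x, y, w, h) in boxes],
    }
    print(f"{frame.nbytes} bytes of pixels -> "
          f"{len(json.dumps(summary))} bytes of metadata")
    frame_id += 1
cap.release()
```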
00:02:59:01 - 00:03:44:05
Unknown
So it's sort of a data-reduction process in perceptive AI. Next we have enhanced AI, which is a little bit more, I think, of a Hailo nomenclature. What this is doing is really using AI techniques to improve or impact the content itself in the video. So this is about content beautification and perfection, basically improving the quality of the video through things like denoising, in particular denoising at very low lighting, image stabilization, auto exposure, things that can normally be done through typical ISPs or other image-processing techniques.
00:03:44:07 - 00:04:08:09
Unknown
But when you apply AI and semantic awareness, the understanding of what's in the image, you can do this more sensibly and you can get better results in terms of low-light performance, for example, or other aspects of the image quality can be optimized through AI. This is not really increasing or decreasing the data that's in the scene.
00:04:08:09 - 00:04:32:00
Unknown
This is about improving the data that's in the scene. And on the right-hand side, this is what's getting all the hype these days: generative AI, which I'm sure most of the users know about. This is content creation. This is things like Llama, ChatGPT, Stable Diffusion. So creating language, creating text, creating graphics, creating video.
00:04:32:00 - 00:05:10:22
Unknown
In this case it works from a smaller amount of data. So this is actually, as its name implies, generating data from a small amount of data like a prompt, and creating a scene, creating natural language, etc. So these are sort of the three categories of AI that we see today. And all of these can actually be used in conjunction with one another to optimize what you're trying to do in a given system. For example, with enhanced AI, if you first beautify the scene in terms of taking out the noise or improving the image quality, you can also get better analytics from that scene.
00:05:11:00 - 00:05:34:07
Unknown
So there's definitely an intersection across all of these types of AI, but it's important to understand, I think, a little bit about the distinction between the three. And what we really want to talk about is what kinds of AI are moving to the edge and why.
00:05:34:08 - 00:06:00:00
Unknown
So first, what is the edge? What do we mean by the edge? In the simplest terms, the edge is the furthermost point, closest to the object that you're detecting or the scene, etc. So this could be as close to the edge as the actual camera sensor, right? We can run AI workloads directly at a camera; that's about as close to the edge as you can get.
00:06:00:02 - 00:06:22:20
Unknown
And then on the flip side, you've got data centers, which are typically training models or serving up websites. So this is the furthest away from the edge. Now, as you get in between those two points, there can be different opinions on whether it's edge, fog, or cloud; there are different terms.
00:06:22:22 - 00:06:45:10
Unknown
But in the simplest terms, think of the edge as the point at which you want to make the determination, and the cloud as the point at which you're training on the data to be able to make those determinations, which are then made at the edge. So now, why would AI be moving towards the edge? Right. Hailo is an AI chip company.
00:06:45:11 - 00:07:13:09
Unknown
What we focus on is inference at the edge. And what we're seeing is a huge evolution of the types of AI that can and should be run at the edge. These are some of the reasons why we are seeing this movement to the edge with AI. The first is real-time insights. Anyone familiar with autonomous vehicles, Teslas, for example, knows that reducing latency and enhancing the user experience is critical.
00:07:13:09 - 00:07:35:21
Unknown
So for example, if you are using video to make determinations about a person that may be passing in front of your vehicle, you can't wait for that scene to be transmitted up through multiple points in the network back to the cloud, processed there so the cloud can determine whether that person is passing in front of the car or not,
00:07:35:23 - 00:07:54:11
Unknown
and then have that information routed back to the vehicle through several hops. It's too much latency. The user experience, especially if you're the person passing in front of the car, is going to be terrible because you're going to have too much delay, and you can't tolerate that kind of delay in many systems, many embedded edge systems.
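For a sense of scale, here is a back-of-envelope latency budget for that pedestrian example. Every number below is an illustrative assumption, not a measurement from Hailo or any vehicle maker.

```python
# Back-of-envelope latency budget for the pedestrian example.
# All numbers are illustrative assumptions, not measurements.

# Cloud path: camera -> network hops -> cloud inference -> back to the car
uplink_ms      = 40   # cellular uplink to the nearest point of presence
hops_ms        = 30   # routing through intermediate network hops
cloud_infer_ms = 25   # inference in the data center
downlink_ms    = 40   # result sent back to the vehicle
cloud_total_ms = uplink_ms + hops_ms + cloud_infer_ms + downlink_ms

# Edge path: inference runs on an accelerator inside the vehicle
edge_infer_ms  = 15

# A car at 50 km/h covers ~13.9 m/s, so every 100 ms of delay is ~1.4 m traveled.
speed_m_per_ms = 50 / 3.6 / 1000
print(f"cloud round trip: {cloud_total_ms} ms "
      f"(~{cloud_total_ms * speed_m_per_ms:.1f} m traveled)")
print(f"edge inference:   {edge_infer_ms} ms "
      f"(~{edge_infer_ms * speed_m_per_ms:.1f} m traveled)")
```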
00:07:54:13 - 00:08:24:02
Unknown
Tesla is just one example; there are a lot of cases where you need to process immediately. Sort of similar to real-time insights is availability. A lot of the applications that we see at the edge today do not have the luxury of relying on constant Wi-Fi or 5G access to the cloud. There are applications such as monitoring a train track for somebody who might pass in front of a train.
00:08:24:04 - 00:08:47:04
Unknown
I think anyone who has been on a train, or seen train tracks, knows that many, many miles of train tracks are in areas that are very rural with no connectivity. So you need to be able to run these analytics, make these decisions, and determine what you need to do at the edge without having to rely, in many cases, on network access back to a cloud to make the determination.
00:08:47:04 - 00:09:19:00
Unknown
So availability is critical in many of these applications. Then you move on to high accuracy. Again, this sort of follows from availability. As I talked about, a lot of the use cases around AI are relying on video data to supply the information needed to make determinations. And as we also know, transmitting and processing video data can be extremely bandwidth-heavy on the network,
00:09:19:02 - 00:09:51:14
Unknown
and costly, especially if you're paying for the ingress and egress of your data in a data center. So what you could typically do is reduce your data rate by either reducing the resolution of the image that you're taking in, or skipping frames and reducing the frame rate of the data. In both of these cases, what you're doing is throwing away data that you could have otherwise used to make determinations of what is happening in that scene, and therefore you are reducing the amount of data that you can use to make solid claims about what you see.
00:09:51:16 - 00:10:18:01
Unknown
You can therefore increase your false alarms or missed detections in certain applications. So if you can process all of that at the edge, you don't necessarily have to reduce the resolution or the frame rate before pushing it up to the cloud. Instead, you're processing it right at the edge, so you can increase your accuracy rate and reduce your false alarms or missed detections.
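The bandwidth trade-off DC describes is easy to put rough numbers on. The resolutions and frame rates below are typical illustrative values for uncompressed video, not figures from the webinar.

```python
# Rough raw-data math behind the accuracy trade-off: downscaling and frame
# skipping before upload throw away pixels you could have used at the edge.
# Figures are illustrative (raw, uncompressed 8-bit RGB).

def raw_mbps(width, height, fps, bytes_per_pixel=3):
    return width * height * bytes_per_pixel * fps * 8 / 1e6  # megabits/s

full    = raw_mbps(1920, 1080, 30)   # what the sensor actually captures
reduced = raw_mbps(1280, 720, 15)    # a typical "save bandwidth" compromise

print(f"full stream:    {full:8.1f} Mb/s")
print(f"reduced stream: {reduced:8.1f} Mb/s")
print(f"data discarded: {100 * (1 - reduced / full):.0f}% of the pixels")
```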
00:10:18:03 - 00:10:40:11
Unknown
And the last of the four, maybe the most important for many applications, is improved privacy. A common application for AI is detection in, say, a hospital situation: you have a patient, and you need to detect and make sure they're getting the proper care,
00:10:40:13 - 00:11:14:07
Unknown
and you need to feed that information through a hospital system. In many cases you don't want to, or can't, transmit that very private data up to the cloud to be worked on by AI in the cloud. So instead, if you process this directly at the edge, you can use a lot of techniques either to shroud the person's face or to not even transmit that information at all, keep it all local, but still get very effective detections on this data.
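A minimal sketch of that pattern, assuming a Python edge application: detect faces locally, blur them, and emit only non-identifying metadata. OpenCV's Haar cascade and the input filename are stand-ins for the models and video feed a real deployment would use.

```python
# Privacy at the edge: blur faces on the device and transmit only anonymized
# metadata. A Haar cascade stands in for the deep models an accelerator runs.
import cv2

face_det = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def anonymize(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_det.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # Shroud the face region before anything leaves the device.
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(
            frame[y:y + h, x:x + w], (51, 51), 0)
    # Only a count (or other non-identifying attributes) is transmitted.
    return frame, {"people_detected": len(faces)}

frame = cv2.imread("ward_camera.jpg")   # hypothetical input image
blurred, metadata = anonymize(frame)
print(metadata)
```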
00:11:14:07 - 00:11:45:12
Unknown
So it's a way to significantly improve the privacy of the data and of the people that are in the scene. So this is why AI tends to move to the edge, and why we've seen such a huge transition from running all the workloads in the cloud to now seeing many of these workloads running at the edge. And they can be running on a variety of different technologies as well.
00:11:45:15 - 00:12:18:19
Unknown
Right? We know that there are CPUs that can be used to handle certain AI workloads. A CPU, a central processing unit, is an amazing general-purpose processor. But at its core, the way a CPU is architected, it's about central processing. And if you look at the way that AI models are structured, they require a lot of parallel updates to connections and to weights.
00:12:18:19 - 00:12:41:20
Unknown
An AI model is filled with connections and weights, and it's a very parallel type of process. So a CPU can get bogged down by the fact that it is a very central processor; it cannot handle these updates in a parallel fashion as well, so it cannot handle as sophisticated an AI workload as something that's been built for AI.
00:12:41:22 - 00:13:13:11
Unknown
And that's where something like a GPU can come in. A GPU is an amazing general-purpose, sort of brute-force way to apply what was originally a graphics processor; that's what a GPU is. What a GPU adds is the ability to do parallel processing very quickly. It's a high-bandwidth type of processing. So GPUs do offer a really good way to enable AI workloads, but they're not really optimized for power and efficiency at the edge.
00:13:13:13 - 00:13:38:00
Unknown
So they're not necessarily optimal in all cases. We also have FPGAs, which can be similar. They have a lot of parallel processing capability, but they're very purpose-built to a specific application. And that's where products like NPUs, like the ones Hailo makes and some of our competitors make, come in. They're really structured from the start for that parallel processing, and to be flexible enough
00:13:38:05 - 00:14:01:21
Unknown
that the weights and the connections can be updated as models are changed, to be able to run many types of models, many types of pipelines, through a single device like a Hailo edge accelerator. So those are the four main types: CPU, GPU, FPGA, and the newest way, and in my view the most efficient way, which is to run these on NPUs.
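To illustrate why the "central" versus "parallel" distinction matters, here is a small Python experiment: the same matrix multiply done one element at a time and then as a single vectorized call that can be spread across parallel hardware. It is only an analogy for what GPUs and NPUs do in silicon, not a benchmark of any particular device.

```python
# The same matrix multiply, first one "central" operation at a time, then
# vectorized so the work can be spread across parallel units.
import time
import numpy as np

n = 128
a = np.random.rand(n, n)
b = np.random.rand(n, n)

t0 = time.perf_counter()
c_loop = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        for k in range(n):
            c_loop[i, j] += a[i, k] * b[k, j]
t_loop = time.perf_counter() - t0

t0 = time.perf_counter()
c_vec = a @ b            # dispatched to an optimized, parallel BLAS routine
t_vec = time.perf_counter() - t0

print(f"loop: {t_loop:.3f} s, vectorized: {t_vec:.5f} s, "
      f"max abs diff: {np.abs(c_loop - c_vec).max():.2e}")
```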
00:14:02:02 - 00:14:20:11
Unknown
So those are the key technologies that we see as well. Now, to bring this all back down, I'll talk a little bit about a couple of examples that I think would resonate with the audience. These are some of the key markets that Hailo works in.
00:14:20:11 - 00:14:40:11
Unknown
And I think, as you know, all AI companies would be focused on these types of workloads. The first is security, and this is a very natural thought, right? You have things like access control. You are determining from a scene: is this person permitted to come into this area, or has this person gotten into an area they should not be allowed in?
00:14:40:13 - 00:15:06:13
Unknown
Right. So this is taking visual data from a face or from a person and making decisions from there. And if you extend that beyond just security, if you have, say, a large box store, a Home Depot or a Walmart or something like this, and you have cameras distributed throughout one of these installations, you can use those cameras not just for security but also for retail analytics.
00:15:06:13 - 00:15:23:04
Unknown
So you can start to not just know who should or should not be in a certain place, but also to look at the flow of the people that are going through the store: where are they lingering, where are they spending their time? And maybe you want to reposition your products so that you optimize for where people are spending their time in the store.
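A minimal sketch of that retail-analytics idea in Python: person detections are accumulated into a coarse dwell-time heatmap of the floor. The grid size and the detection coordinates are made-up stand-ins for real camera output from an edge detector.

```python
# Accumulate person detections into a coarse dwell-time heatmap of the store.
import numpy as np

GRID = (10, 10)                      # store floor divided into 10x10 cells
heatmap = np.zeros(GRID)

# (x, y) positions of detected people in normalized floor coordinates [0, 1)
detections_per_frame = [
    [(0.12, 0.80), (0.55, 0.40)],    # frame 1
    [(0.13, 0.81), (0.56, 0.41)],    # frame 2 (same shoppers lingering)
    [(0.90, 0.10)],                  # frame 3
]

for frame in detections_per_frame:
    for x, y in frame:
        row, col = int(y * GRID[0]), int(x * GRID[1])
        heatmap[row, col] += 1       # each hit ~ one frame of dwell time

hot_cell = np.unravel_index(heatmap.argmax(), heatmap.shape)
print(f"busiest cell: {hot_cell}, dwell count: {int(heatmap.max())}")
```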
00:15:23:06 - 00:15:46:20
Unknown
AI does a great job at the edge of being able to provide you these analytics. Industrial automation is another key place for AI at the edge, with things like robotics and automatic optical inspection. A robot is similar to an autonomous vehicle, right? It's got to navigate the world around it using vision. Sometimes you add lidar or radar to that, but primarily vision.
00:15:46:22 - 00:16:17:18
Unknown
And it's got to make decisions based on the environment around it. Then there's automatic optical inspection. You've got a production line running thousands of units per minute across it, and you need to detect what's good and what's bad. AI does an amazing job at the edge of picking up patterns, detecting what is anomalous and what is good, and deciding what you should pick out of the process.
00:16:17:20 - 00:16:42:05
Unknown
We've already talked about automotive. Personal compute is another key area where we're seeing a movement, and it's accelerating, towards AI at the edge. Many people saw the announcement maybe a month ago or so from Microsoft, where they released a lineup of AI PCs. Their goal there, if you go back to the ChatGPT example, right:
00:16:42:05 - 00:17:10:16
Unknown
ChatGPT is a large language model that has been run primarily out of the cloud. But if you're working with a PC and you don't have internet access, or for other reasons, and you still want to be able to run these workloads, you need something that can run AI workloads. So we're seeing a movement now where the personal computers that have typically been leveraging CPUs,
00:17:10:18 - 00:17:36:12
Unknown
now they're adding NPUs, neural processing units, or integrated neural processing units, into their products so that they can run ChatGPT or Sora or any of these gen AI models directly on the edge, without having to rely on Wi-Fi or access to ChatGPT in these data-center-type applications.
00:17:36:14 - 00:17:52:04
Unknown
So those are a few concrete examples where we're seeing AI. Now I'll turn it over to Charlie, and I think he'll go through Advantech's view of edge AI and examples there.
00:17:52:06 - 00:18:38:05
Unknown
All right. Thank you very much, DC. So this is Charlie from Advantech. Before I go into some of the examples I have here, I just want to quickly preface that, from my perspective, the idea of edge computing, quite frankly, is hardly anything new. I've been with Advantech for over 20 years, and we have many applications, like, for example, a parking lot that's part of an access control system, where traditionally you use different methods: you pull up in a car, you press a button, and then when you pay, you slip a payment into the system.
00:18:38:07 - 00:19:02:20
Unknown
Those are still available today, so it's not something that's really new. But now, given the capability of AI, you have additional ways to collect payment, for example, license plate recognition that's associated with the account that you set up on your phone, and you can just pay the parking fee via your phone
00:19:02:20 - 00:19:31:13
Unknown
instead of having to reach out of your vehicle and try to access the terminal, where you may or may not be able to reach that far, so you end up having to open the door. So those are the areas where AI is actually going to start to make an impact as far as making our lives easier. And there are some vertical markets that, from Advantech's standpoint, we see actually starting to utilize edge AI.
00:19:31:13 - 00:19:55:01
Unknown
The first one is actually healthcare. We have been in healthcare for many years, in patient monitoring and many other vertical applications. And what we do see, particularly in edge AI for healthcare, is surgical. So some of the examples that I share here today are orthopedic surgery, spine surgery, and advanced endoscopic surgical care.
00:19:55:03 - 00:20:33:18
Unknown
Those are actual, real applications that we're working on with different companies that are bringing them into the world right now. So, for example, for the orthopedic surgery we're utilizing different sensors and cameras for analyzing the images captured by X-ray, to help optimize the surgical workflow, augment the image interpretation, enhance the surgical outcomes, and actually be able to maximize the clinical value from the technology investment,
00:20:33:19 - 00:21:09:06
Unknown
in order to optimize the performance and outcome for both hospital and surgery-center environments. So that helps the orthopedic surgery become more successful. For augmented-reality spine surgery, we're able to utilize the imaging to create accurate 3D anatomy that gives the doctors visualizations, giving them ideas on where to insert tools and how far they need to insert them to get a successful surgical outcome.
00:21:09:08 - 00:21:39:01
Unknown
As far as the advanced endoscopic surgical care, you have end-to-end, AI-powered software. We provide intelligent, data-driven patient communication and also real-time, interactive AI for surgical analytics and feedback. Now, when it comes to surgical applications, when you include AI in them, as DC mentioned earlier, let's take a look at, for example, the spine surgery.
00:21:39:03 - 00:21:55:20
Unknown
You're in the middle of a procedure. You want the computer to tell you how far you need to go. If, as you go, the data travels all the way back to the cloud to say, okay, maybe you should stop now, then with that delay you're drilling a little bit too far, and now we have a serious health problem.
00:21:55:22 - 00:22:28:17
Unknown
So that's where the system needs edge AI, particularly for the surgical application in this case, to make sure patient safety is ensured; there are some things we definitely do not want to mess with. Then let's take a look at the next example, security, which also resonates with what DC mentioned earlier. And we have an actual, real working relationship with Hailo right now in the security application.
00:22:28:19 - 00:23:09:08
Unknown
The application, quite frankly, is object identification. When you're going through security, it's going to scan your body, scan your belongings, scan your person: whether or not you're bringing anything dangerous, whether or not what you're bringing is acceptable. It's going to be able to quickly identify what you have on your person, or what's hidden on your person, through various sensing and imaging technologies, to try to figure out whether you are safe in this location.
00:23:09:10 - 00:23:31:10
Unknown
And if you look at the lower right-hand corner, you've got a guy in a public space holding a shotgun. Quite frankly, that's certainly not something that you want to see in person. As far as safety is concerned, that should be immediately identified, and the authorities or the security personnel informed, to get involved and take care of the public safety concern.
00:23:31:12 - 00:23:58:13
Unknown
Then we also have facial recognition. That's another application that I'm currently working on right now, where facial recognition can actually be used for various different purposes, including access control and including marketing and technology. So if you walk into a store, it can recognize your face: you are the VIP, it can come up with your age, it can come up with your information if you're willing to share it.
00:23:58:19 - 00:24:23:16
Unknown
I know I'm veering a little bit away from the security application, but at the same time, it can be used for other purposes as well. Behavior tracking, based on the AI imaging: are you in the area? Are you holding a baseball bat? Going back to object identification: are you breaking a car window?
00:24:23:18 - 00:25:06:07
Unknown
Those can be captured and analyzed and actually prompt an immediate response. You may or may not have the time to go back to the cloud to ask, okay, is this guy doing something unsanctioned, do I need to do something? If it's processed right there at the edge, it can alert the authorities and immediately inform them. Perimeter security is another application that I do see, where you can set up, for example, if you have a heavy-equipment yard, you certainly don't want unauthorized people rummaging through and trying to steal things or whatnot.
00:25:06:07 - 00:25:35:23
Unknown
So are there people where they're not supposed to be? Who is authorized? That goes back to the facial recognition. And that leads to the next point, worker safety. For example, if you're in a place with a lot of heavy equipment, or you're working with something dangerous, you need protection; on a construction site, for example, you need to wear your hard hat, you need to wear the reflective jacket.
00:25:36:01 - 00:26:01:23
Unknown
Are those workers wearing that protective gear? The worker safety part of the AI application can identify those objects, whether or not they are being worn properly, and immediately notify the shift supervisor: hey, some employees are taking off their hard hats in an area where they're supposed to wear them, so you might want to intervene.
00:26:02:01 - 00:26:25:01
Unknown
Go talk to the worker to make sure that they're in compliance with the safety protocol. So things like that. In the security applications, we see more and more edge computing with AI capability being implemented. And then, as DC mentioned earlier, there's the other one, industrial automation, and this is actually the heart of Advantech's computing solutions.
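Before moving on, here is a rough sketch of the hard-hat check Charlie describes, assuming a detector that returns "person" and "hard hat" boxes; the boxes and the overlap rule are hypothetical stand-ins for a real PPE model's output.

```python
# Pair up "person" and "hard hat" detections and flag anyone without one.

def overlaps(person, hat):
    """True if the hat box overlaps the top third of the person box."""
    px, py, pw, ph = person
    hx, hy, hw, hh = hat
    head_top, head_bottom = py, py + ph / 3
    return (hx < px + pw and hx + hw > px and
            hy < head_bottom and hy + hh > head_top)

detections = {
    "person":   [(100, 50, 60, 180), (300, 40, 55, 175)],  # x, y, w, h
    "hard_hat": [(110, 45, 40, 30)],                        # only one hat seen
}

violations = [
    p for p in detections["person"]
    if not any(overlaps(p, h) for h in detections["hard_hat"])
]
if violations:
    # In a real system this would notify the shift supervisor.
    print(f"ALERT: {len(violations)} worker(s) without a hard hat")
```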
00:26:25:01 - 00:27:01:02
Unknown
We have been in the business of industrial automation since 1983, so we have a lot of applications involving industrial automation. One very important aspect is inspection and quality control. Traditionally, you may require workers to just look at all this product passing by and try to pick out what's not up to specification, but right now we have several different tools available, including cameras and sensors.
00:27:01:04 - 00:27:33:07
Unknown
The idea is that the system now needs to have sufficient I/O to connect all those different sensors and pieces of equipment, and that applies to all the previous examples as well. So we have cameras to help detect visual defects. It can be that a label is not printed properly. One of the applications I have here is in the making of cereal.
00:27:33:07 - 00:27:57:19
Unknown
So you have corn flakes coming through on the conveyor belt, and the camera can detect which ones are out of spec. And now you have something like an air-pressure gun to blow those flakes out and sort them out. That is one possible application based on edge computing. And once again, you certainly don't have the time to go to the cloud to say, okay, that flake needs to be taken out.
00:27:57:19 - 00:28:28:04
Unknown
The application must be deployed at the edge. The AI is connected to several different pieces of equipment to perform the proper inspection and quality-control job in a timely manner. And there are other things on the industrial computer: we need those I/Os, and some of the I/O that exists on an industrial computer is not available on a commercial computer. You look at laptops these days when you talk about computers:
00:28:28:06 - 00:28:52:14
Unknown
what is that, GPIO? What is that, a COM bus? Why do I even bother to have that? But those examples of industrial I/O are actually very important for connecting all the different sensors and equipment, to be able to control things like the air-pressure gun that I mentioned earlier, the speed at which the conveyor belt is moving, controlling servo motors.
00:28:52:16 - 00:29:20:04
Unknown
And the I/O can control, say, for example, gates to open. Very often, particularly in agricultural and food-processing applications, you see different sizes of eggs, different sizes of fish; how do you sort them? Those are usually controlled by a camera and then also the GPIO, to control the mechanism that sends them to different sorting locations.
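Tying that together, here is a sketch of inference driving industrial I/O: a flagged flake triggers a GPIO pin that fires the air-pressure gun. RPi.GPIO is used purely as a familiar stand-in for whatever GPIO driver an industrial box actually ships with, and classify() is a placeholder for the model running on the accelerator.

```python
# When the model flags a flake as out of spec, pulse a GPIO pin that
# drives the air-pressure gun on the conveyor line.
import time
import random
import RPi.GPIO as GPIO   # stand-in; an industrial PC would use its vendor library

AIR_JET_PIN = 17                       # hypothetical output pin

GPIO.setmode(GPIO.BCM)
GPIO.setup(AIR_JET_PIN, GPIO.OUT, initial=GPIO.LOW)

def classify(frame):
    """Placeholder: returns True if the flake is out of spec."""
    return random.random() < 0.05      # pretend 5% of flakes are rejects

def grab_frame():
    """Placeholder for the camera capture on the conveyor line."""
    return None

try:
    while True:
        frame = grab_frame()
        if classify(frame):
            GPIO.output(AIR_JET_PIN, GPIO.HIGH)   # fire the air jet
            time.sleep(0.02)                       # 20 ms pulse
            GPIO.output(AIR_JET_PIN, GPIO.LOW)
        time.sleep(0.01)                           # pace to the belt speed
finally:
    GPIO.cleanup()
```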
00:29:20:06 - 00:29:49:11
Unknown
Then there's the internet or wireless connection, basically some form of communication. From my understanding, there's definitely a machine-learning aspect to it. In regards to edge AI, the AI is not just deployed to do what needs to get done; it also needs to be updated constantly in order to improve upon itself, and to help collect data to feed back to the mainframe to assist the machine learning.
00:29:49:11 - 00:30:23:17
Unknown
So there's that feedback loop now; the wireless or wired communication allows the system to continue supplying that data to the data center while at the same time receiving updates to the deployed model as necessary. So food processing, AOI inspection, and manufacturing quality control for different manufacturing applications are things that we do see happen very often.
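A minimal sketch of that feedback loop, under assumed file paths and hooks: low-confidence frames are queued for upload to the data center, and a newly delivered model file replaces the deployed one. All names below are hypothetical.

```python
# Edge-side feedback loop: queue hard examples for retraining, swap in
# updated models dropped off by the sync agent. Paths are hypothetical.
import os
import shutil
import time

MODEL_PATH     = "/opt/edge/model.onnx"         # model currently deployed
INCOMING_MODEL = "/opt/edge/incoming/model.onnx"
UPLOAD_DIR     = "/opt/edge/upload_queue"       # synced back to the data center
CONF_THRESHOLD = 0.6

def run_inference(frame_path):
    """Placeholder for inference on the accelerator; returns (label, confidence)."""
    return "defect", 0.42

def feedback_step(frame_path):
    label, conf = run_inference(frame_path)
    if conf < CONF_THRESHOLD:
        # Hard examples go back to the data center to improve the next model.
        shutil.copy(frame_path, UPLOAD_DIR)
    return label

def maybe_update_model():
    # A newer model delivered by the sync agent replaces the deployed one.
    if os.path.exists(INCOMING_MODEL):
        shutil.move(INCOMING_MODEL, MODEL_PATH)
        print("deployed updated model at", time.ctime())
```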
00:30:23:19 - 00:30:52:08
Unknown
So from the hardware standpoint, when you look at what is necessary for a good edge AI computer, as far as the common requirements: scalable performance, different...
00:30:52:09 - 00:31:18:04
Unknown
Well, we seem to have lost Charlie. Maybe we should move on to the Q&A; at least we still have DC here. Charlie, I'm sure, is probably reconnecting or refreshing his browser; that's often the case around here. So, DC, let me give you a couple of questions to get started with here.
00:31:18:05 - 00:31:48:11
Unknown
One of the things I noticed when I was looking at this previous slide here, slide ten, is that industrial computers have been around for a long time. If you look at this box, you see lots of I/O, and I usually think of those industrial computers as pretty much being there to handle the I/O.
00:31:48:13 - 00:32:17:22
Unknown
How have they become more powerful? I mean, processors get more powerful all the time, but has there been a big jump in the computing power that's going into industrial PCs, industrial computers, to handle the AI? Well, to focus on the actual compute power itself, that's a little bit more of a Charlie question.
00:32:17:22 - 00:32:55:03
Unknown
But when I think about what has been required at the industrial PC level to handle AI workloads, it's not really about increasing the CPU power. You can certainly throw more CPU cores at the challenge of AI and scale that up to handle larger and larger workloads. But as I spoke about in the first part, the architecture of a CPU is not by default designed to handle these kinds of AI architectures.
00:32:55:05 - 00:33:27:12
Unknown
So you can scale up the CPU in a huge way, but you're probably going to be overspending on the CPU. What you really need to do is scale up your AI capability to match the AI workload, or the types of applications, that that particular AI compute box needs to handle. And this is one of the reasons that I think Advantech and Hailo is a good pairing: we offer an AI accelerator that can be added onto an Advantech system.
00:33:27:14 - 00:33:54:20
Unknown
So you don't have to scale up your CPU necessarily. You can scale up the AI only and get really the right recipe: enough compute for the tasks the CPU is well-suited for, and then the AI scaled up to match the level of AI workloads, things like object detection and classification, that you actually need.
00:33:54:20 - 00:34:17:03
Unknown
It's almost like memory and storage. Sometimes those things scale up together; sometimes you need more memory and then you need more storage; but sometimes you need a lot of memory and not a lot of storage. I think the parallel is the same for CPUs and AI. The workloads are different, and the applications are becoming more varied than ever.
00:34:17:05 - 00:34:53:11
Unknown
And so it's about getting the right recipe of CPU and AI compute. I see. Okay, this is a question... I'm back. Okay, Charlie, it's good to have you back. To our audience, this was Charlie's last slide anyway, so we're going to move into the discussion here. Sure. So, DC, you mentioned in your presentation, you talked about inspection, manufacturing inspection, and knowing whether something is good or something is not.
00:34:53:12 - 00:35:34:19
Unknown
Now, machine vision software, and all of that, that whole machine vision industry, has been around for a long time. So how does bringing AI into that improve the ability to handle these possible defects? Yeah. So with traditional machine vision techniques, you're generally looking for very specific characteristics, things around edge detection and problematic features around these edges and so forth.
00:35:34:21 - 00:36:01:10
Unknown
What AI is best at, or one of the things it's best at, is identifying patterns where we would not recognize the patterns, right? And so, as you're getting to a point where, with traditional CV techniques, you don't necessarily want to train for every instance, like you don't want to design an algorithm for every instance that you'll see.
00:36:01:12 - 00:36:27:05
Unknown
Instead, a better approach is to feed in a ton of data and let AI do what, like I said, it does best: identify these patterns based on a lot of data that it has available to it, to make these detections or these conclusions. And this is the whole difference between training AI based on a ton of data versus a defined algorithm.
00:36:27:11 - 00:36:48:08
Unknown
With the defined algorithm, you have to have a person in the loop that defines what is good and what is bad. With AI, what you're relying on is an enormous amount of data, and on the AI to make different types of conclusions, different connections, and come to better conclusions about what is a defect, or what is a person walking through a scene.
00:36:48:08 - 00:37:25:22
Unknown
Right. So it's all about training on a bunch of data versus pinpointing the exact algorithm that you have to write to identify the anomaly or the defect.
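A tiny contrast of the two approaches: a hand-written threshold rule next to a classifier fitted from labeled examples. The feature vectors and labels are made-up stand-ins for real inspection data.

```python
# "Defined algorithm" vs. "train on data": a hand-tuned rule next to a
# classifier whose decision boundary is learned from labeled examples.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each sample: [mean_brightness, edge_density] measured on a part image.
X = np.array([[0.82, 0.10], [0.79, 0.12], [0.45, 0.40],
              [0.50, 0.38], [0.80, 0.11], [0.48, 0.42]])
y = np.array([0, 0, 1, 1, 0, 1])      # 0 = good part, 1 = defect

def rule_based(sample):
    # A person-in-the-loop decided this threshold; it must be re-tuned
    # whenever lighting, parts, or defect types change.
    return int(sample[0] < 0.6)

model = LogisticRegression().fit(X, y)  # the boundary is learned from data

test = np.array([[0.55, 0.35]])
print("rule says: ", rule_based(test[0]))
print("model says:", int(model.predict(test)[0]))
```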
00:37:26:00 - 00:37:51:10
Unknown
Oh, I think we've lost you for a second. Martin, your audio is not coming through. Yes, I was trying to mute so that none of you would have to hear what you didn't hear anyway. Charlie, welcome back. My next question is about the box that you showed a couple of slides back; actually, let me jump back to that.
00:37:51:12 - 00:38:16:18
Unknown
So back here, you've got an industrial computer. It's got lots of I/O on it. It's got serial; this one's even got DVI; you've probably got VGA on there; you've got LAN, you've got USB, all of these things because you never know what's going to connect to it. But then you mentioned wireless LAN, more for data collection and the system.
00:38:16:20 - 00:38:45:19
Unknown
How would someone use this type of industrial computer with, say, wireless IoT? How are IoT and this industrial computer used together? And do you have applications that you might be able to cite, to tell us about how what I'm calling IoT connects to this type of computer?
00:38:45:21 - 00:39:12:04
Unknown
Certainly. Say, for example, there are interfaces within the system that you cannot see from the outside, typically in the form of mini PCI Express or the newer M.2, with different types of keys for different types of applications. We have those available for Wi-Fi, Wi-Fi/Bluetooth, and 4G and 5G connections.
00:39:12:04 - 00:39:51:13
Unknown
But from the IoT standpoint, you're probably talking about something like LoRa, those types of low-frequency but longer-distance communication implementations. And we have applications, or are working on applications, in things like detecting pressures within oil-transmission pipes or whatnot. So those can certainly be implemented in our hardware solution, just depending on the type of technology that's needed,
00:39:51:15 - 00:40:09:23
Unknown
what kind of wireless communication protocol needs to be implemented, and we can install the module accordingly to create that capability with our hardware solution.
00:40:10:01 - 00:40:14:15
Unknown
Martin, we ask you again.
00:40:14:17 - 00:40:44:11
Unknown
Okay. So are there any standards that come into play here? And I'm not talking about the communication standards; they're all well defined. But are there any standards, and it may be very much industry-dependent, that our viewers should know about regarding industrial computing and such?
00:40:44:13 - 00:41:14:06
Unknown
As far as standards go, I guess, like you say, it's industry-related, for example, if you're dealing with certifications. There is, for example, the E-Mark needed for vehicles: when you're installing a computer into a bus, there is that certification.
00:41:14:06 - 00:41:45:15
Unknown
What was it, IEC, or no, EN 50155 for railway-type applications. There's the system's IP rating. So there are many different standards, not related to protocols, not related to communication; but when you're implementing, particularly, an industrial computer into harsher environments, there are all different types of standards that apply to the computer, that are necessary for it to survive in those types of environments.
00:41:45:16 - 00:42:12:08
Unknown
The most common consideration that we do have is not necessarily a standard, but more just how hot it is going to be within the implementation. It's not always going to be a nicely air-conditioned room; it might be outside, it could easily be an outdoor signage kiosk, or it could be something that needs to be in a warehouse with no air conditioning.
00:42:12:14 - 00:42:28:14
Unknown
So many of our computers, whether it's the boards or the systems, the computing solutions we offer, we offer those with industrial-grade, industrial-temperature capability to make sure they can survive that type of environment.
00:42:28:16 - 00:42:56:14
Unknown
Okay. And we've got some more questions coming in, so here's one: how much of an impact does the capture hardware, meaning the lens, camera resolution, and so on, have on the performance of computer-vision-related tasks? That is, how much can an AI accelerator tolerate with low-resolution devices? Anyone want to try that?
00:42:56:16 - 00:43:19:17
Unknown
Okay, I'll take that, if that's okay. All right, go ahead. Okay. So, in general, the answer is that the resolution, the camera type, those should be defined based on the task at hand. For example, if you're doing optical inspection and you're looking for fine detail, you need a lot of resolution.
00:43:19:19 - 00:43:52:18
Unknown
You can use generative AI to simulate details, but if you're really trying to do optical inspection, you want a high-resolution image; you don't want generated data, right? But at a high level, we also have certain applications, like security and others, where customers are designing with, or using, very low-end cameras that already exist; say they have a few very inexpensive cameras in a small business that are used for security.
00:43:52:20 - 00:44:20:20
Unknown
If you add on an edge system from Advantech with an accelerator from Hailo, you can still get extremely good results from very low-end cameras, for detecting, you know, where people are and where they shouldn't be. So the answer is, it depends on what you're trying to achieve, but an accelerator can get very good results out of a very low-end camera.
00:44:20:22 - 00:44:43:05
Unknown
And in some cases you can use things like enhanced AI, improving the quality of the image to compensate for the fact that your lens or your camera is not the highest quality. So in some cases, you might be able to get away with a lower-cost, lower-quality camera with better analytics, which an accelerator could give you.
00:44:43:07 - 00:45:12:12
Unknown
Okay. Let's move into another section here, and this one's going to be for DC, obviously: why Hailo? And then I'm going to have a follow-up question from the audience. But why don't you go ahead with that first, DC? Sure, thank you. So, again, there are kind of two main classes of, or ways to think about, AI.
00:45:12:14 - 00:45:42:19
Unknown
There's training AI and there's inferencing, or doing inferencing with AI. Typically, training is done in a data center; we all hear about the enormous amount of power it takes to train up ChatGPT, etc. Hailo is not focused on training models in a data center. What we are focused on are the highest-performance, lowest-power, best-power-efficiency, most cost-efficient edge inferencing devices.
00:45:42:19 - 00:46:06:21
Unknown
So we're really purpose-built to handle inferencing at the edge. We offer a very significant level of AI TOPS. We haven't really talked about tera-operations per second, but that's really how you measure and think about AI performance. So from just a straight AI-performance perspective, we have solutions starting at 13 TOPS, going up to 208 TOPS
00:46:06:21 - 00:46:32:09
Unknown
and beyond, to give you a ton of capability for smaller workloads, a smaller number of cameras, up to dozens or hundreds of cameras. So you've got a lot of AI to work with. You've got a versatile set of form factors to work with; we've got M.2 and PCIe options. So in terms of flexibility and power, we've got you covered.
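As a rough illustration of how those TOPS figures translate into camera streams, here is a back-of-envelope calculation. The per-frame operation count, frame rate, and utilization are assumptions, not Hailo specifications.

```python
# Back-of-envelope sizing for the TOPS figures mentioned above.
# The per-frame operation count is an assumed cost for a mid-size detector.
ops_per_frame = 10e9        # ~10 GOPs per inference (assumed model cost)
fps           = 30          # frames per second per camera
utilization   = 0.5         # assume the accelerator runs at ~50% in practice

def cameras_supported(accelerator_tops):
    ops_per_second = accelerator_tops * 1e12 * utilization
    return int(ops_per_second / (ops_per_frame * fps))

for tops in (13, 208):      # the two endpoints DC mentions
    print(f"{tops:>3} TOPS -> roughly {cameras_supported(tops)} camera streams")
```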
00:46:32:11 - 00:46:53:07
Unknown
The other thing that's critically important to any AI design is that you want to make sure you can actually bring in and work with the models that you want to work with. And that's where our software comes in. We haven't talked about software too much, but we make silicon, and the software that accompanies our silicon is really powerful.
00:46:53:07 - 00:47:12:11
Unknown
In order to be able to work with any type of model that our customers want to work with, we have hundreds that have been ported to Hailo already, transformers and convolutional networks; those are the two main types of networks that we see today. And so that software needs to be powerful, easy to use, and let you work with your models.
00:47:12:11 - 00:47:28:18
Unknown
And that's the other aspect. So from a versatility and a software perspective, I think we're head and shoulders above the rest. That's what I would say for why Hailo.
00:47:28:20 - 00:47:56:08
Unknown
All right, it looks like it's my part. So why Advantech? We've been in this business since 1983, as I mentioned earlier, and we are a pretty successful company so far in offering the widest variety of products available when it comes to edge implementation. That is the main reason we have so many different available solutions.
00:47:56:08 - 00:48:23:06
Unknown
Just to give you an idea, we have single-board computers in various sizes, industrial motherboards, and different types of systems, such as the general-purpose system that you have seen earlier. Those types of implementations have a lot of different I/Os allowing connections to different sensors and different hardware solutions, and expansion capability as well,
00:48:23:06 - 00:48:53:09
Unknown
if some expansion is necessary to add on GPUs or, say, accelerator modules from Hailo; we have mini PCI Express, M.2, or even PCI Express for the higher-performance ones. And then we even have, just as a quick example here, some very ruggedized designs, built for very harsh environments. This one has IP65 and everything; I'm not going to go into the details.
00:48:53:11 - 00:49:21:12
Unknown
But as far as vertical-market solutions are concerned, we certainly have additional hardware solutions that we can provide to the customer for their edge computing needs. Okay. Well, Charlie and DC, thank you for this informative webinar and for answering lots of questions. To those of you whose questions we didn't get to, we can still answer them offline.
00:49:21:14 - 00:49:32:10
Unknown
And if you're watching this on a recording, you can still ask questions, and we'll be able to get back to you with those.
00:49:32:12 - 00:50:00:11
Unknown
Again, thank you to Advantech and Hailo for this webinar. And just a reminder, this will be available on demand at eetrainingdays.com from EE World. I'm senior technical editor Martin Rowe. Thank you for watching.
00:50:00:11 - 00:50:15:12
Unknown
Hailo.ai. Keep the conversation going by sharing this with your peers, and never stop exploring the future of AI.