How is the use of artificial intelligence (AI) shaping our human experience?
Kimberly Nevala ponders the reality of AI with a diverse group of innovators, advocates and data scientists. Ethics and uncertainty. Automation and art. Work, politics and culture. In real life and online. Contemplate AI’s impact, for better and worse.
All presentations represent the opinions of the presenter and do not represent the position or the opinion of SAS.
KIMBERLY NEVALA: Welcome to Pondering AI. I'm your host, Kimberly Nevala. In this episode, it is my pleasure to bring you Mieke de Ketelaere. Mieke is the Adjunct Professor of Sustainable, Responsible, and Ethical AI at the Vlerick Business School in Belgium. She is a prolific author and researcher on the topic of artificial intelligence and machine learning and was recently honored by Computable Belgium as the 2024 IT Computer Personality of the Year.
We're going to be talking about what it takes to ensure we're deploying AI sustainably, both environmentally and socially. Welcome to the show, Mieke.
MIEKE DE KETELAERE: Hey, thanks for having me.
KIMBERLY NEVALA: Now, Mieke, you are a self-professed tech optimist who is nonetheless concerned about the trajectory of our current AI deployment. So talk to us a little bit about how AI - or our current approach to AI, I should probably say - might be entrenching or exacerbating the digital divide.
MIEKE DE KETELAERE: Well, it's a very good question to start with, I think. So I'm a digital native. I'm an AI engineer by education, going back to '92. Maybe it's interesting to understand why I studied AI back in '92. It was because they promised us that these systems were going to allow us to take uncertainties out of our lives.
And so for a very long time, that was exactly the case. AI was helping tons of sectors with things we couldn't see with the human brain: which patient is going to have which illness, how much energy we have to forecast, where potential fraud is happening, et cetera.
What I see recently - and that's why I'm really pushing this sustainable and ethical focus - is that we got to a point where AI is putting many more uncertainties into our lives than it's taking out. And so that's where I really shifted hats from AI engineer to advocate for ethical and trustworthy AI. That's where it's coming from.
KIMBERLY NEVALA: And what are the uncertainties that you are most concerned about? Are there some examples that highlight key issues you think we need to be particularly aware of as we continue down this path?
MIEKE DE KETELAERE: Well, indeed. And to be very frank, it's always the case: as long as technology is in the hands of engineers, we love the technology. We will take every step we can with it. But what we tend to forget is the impact of our technologies on people and planet.
So that's where, in my opinion, things are slightly going wrong.
And let me explain that a little bit more. When we take the impact on people, I'm not simply talking about the known fact of bias. I think, by now, everybody is aware that these systems can contain bias because they're based on human decisions from the past.
But on the human side, I am more and more thinking about the fact that we are launching these technologies, especially the bigger platforms, the generative platforms, without any preparation of the society that's using them, or even of the companies using them.
And the fact that AI systems work the way they do - they're based on advanced statistics - means that they need to come with a little bit of explanation: the fact that they are not always right, that they come with a certain accuracy, that you need to understand the guardrails that have been put around them. All these things, we haven't taken the time to explain.
Compare this to any other sector - I think of the pharmaceutical sector, the car industry, the food sector. They do testing. They do documentation. They take a lot of steps before they launch a product to the market. What we've seen, especially with the bigger platforms like ChatGPT, is that they are launched to the market without any documentation in an understandable language for those using them.
So that's one point that is really important for me. Because, on top of that, compared to any other type of technology we've seen in the past, this is a technology that really reflects the human way, human intelligence - the way it creates images, the way it creates discussions or dialogues.
And so people out there who aren't busy with technology day in, day out are not able to step back from it and see that it remains just a tool. Because at the end, that's what it is. It's a technology. It's a tool. But it is a very human-like tool.
And so what we've seen, and that's only in the last couple of months, is that people start to form relationships with these tools, because that's what we do as humans. When things seem very human, we're just going to form relationships with them. And so that's a very important point to tackle - so not just bias, but the fact that it needs to come with explanations. And on top of that, we are rushing like hell from a technology point of view.
And the gap between those who understand that it's just a tool and those simply using the tool is only getting bigger. And it might have an impact on many more things than we can imagine right now - also on the way we are going to have human relationships in the future. So that's, I would say, the human part. We're rushing the technology too much. And the question I'm asking myself is: why? Why don't we spend time first to explain it?
The second point is that over the last 70 years - because that's how old AI is - it's of course normal that we always put accuracy as the main variable to compare systems. How accurate is your system? How good is it? How useful is it? Because, of course, if there's any type of championship, investors will give the one with the best system the most money, or the best prize, which is very logical.
Only, accuracy in an AI system comes with the amount of data that you have and the compute power you have. That means that in our urge for accurate systems, we have started to increase and increase that data, the training part and also the compute part. And that comes at a certain cost at the level of energy.
This simple fact, I tell you, was unknown until two years ago. Two years ago, we had a green AI initiative, and it was all very good. And I thought, are we finally there? But then generative AI pops up, and suddenly this green AI focus slides all the way into the background. Again, our search for perfection, et cetera.
We tend to forget the impact on the planet. And so these are the two parts where I think, as with any technology in the past, we really need to turn things around.
I mean, let me finish on this question by comparing it to the car. The car in the '50s was bringing us very fast from here to the beach. That was our profit: spending time with family and friends on the beach. How cool is this? But then we realized that without seatbelts, without a driver's license, without road signs, it was a pretty dangerous technology. So altogether, we created a framework around it so we could still enjoy the profit, but in a much more human-protecting way. And most of us know that in the meantime we've also started to work on the engines of the car, because they had a bad impact on the planet.
So I finish where I started. AI today is in the hands of engineers. Engineers understanding the technology, engineers looking for the profit - they have to pay back their investors. But they are not always putting the two other factors - people and planet - into the equation.
KIMBERLY NEVALA: Does this overall approach - the pursuit of the means as the end, instead of looking at the end and what it is we're hoping to accomplish… You've also noted that there's this weird attitude, you called it separationism, that is developing. How does that relate to some of the points you just made?
MIEKE DE KETELAERE: Well, I would even challenge the point that we know where we're going to.
KIMBERLY NEVALA: Fair, fair.
MIEKE DE KETELAERE: I think we know where we are coming from. And sometimes I use the comparison to the Tower of Babel. People were building a tower to get to heaven. Everybody knew it wasn't possible to get to heaven, but then there was God, just telling us: hey, you know what? I'm going to give you all a different language so you understand, in this indirect way, that you're trying to reach something which is not reachable.
Here, in AI, first of all, the goal we're trying to reach - I think it's an unwritten goal - is to reach human intelligence. We already know that's impossible to reach, for many different reasons. First of all, we don't understand the word "intelligence." And secondly, we don't know exactly how the human brain works. We already understand a lot - in fact, AI has even been helping us understand the human brain - but there's a lot we don't know.
And like I said, at the level of intelligence: what is intelligence? Is it human? Is it nature? Is it animals? Is it plants? Et cetera. So there's a philosophical discussion we are not going to solve here.
But the point is we're trying to reach something which, in several respects, is unknown. And unfortunately, there is no God helping us to say: well, wait a minute, slow down. Sit down around the table, and let's first define what we're trying to reach.
And so that's, I think, a bit of a shame: we are rushing in a race without understanding where the race is going. It would be very nice to get a bit more of a grasp on that.
KIMBERLY NEVALA: Yeah. And how does the classification play in? Because AI products are digital, but we classify them, we think about them, as services. And to some extent, there's a lack of tactile or physical connection with them. I'm wondering how that impacts the consumer's perspective of these technologies. We used to talk, especially when we first started going online, about things happening "in real life". Well, digital life today is real life. So it's a somewhat quaint distinction today, what's happening in real life versus what we're doing online. Today, there's no differentiator.
MIEKE DE KETELAERE: You no longer make a distinction, because it's all the same. It's life.
KIMBERLY NEVALA: Yeah, it's just life.
MIEKE DE KETELAERE: But it's a very good question, because that's maybe the danger compared to any other type of technology we've seen: the fact that it's a hidden technology.
If you take the definitions very strictly, AI is just the algorithm. It's not anything else around it. It's not the sensors that, for example, take the pictures or pick up the sound. It's just the algorithm that runs on a little chip and calculates a number. That number is then translated into a decision. And like this, we now have autonomous decisioning systems. That can be a spam filter deciding: this is spam, this is not spam. But it can also be our autonomous car.
What we see in broader society is that people call these autonomous systems AI, because they need an object. Whereas, in fact, it's something inside. Just the calculation - that's the AI. But it gets nerdy if we explain it like this.
But it's also where the mistake starts, because people go: your AI isn't working. But, in fact, it's not the AI. The AI is just a simple algorithm spitting out a number. I sometimes call it the onion layers around it, where things start to go wrong: some engineers have put guardrails around it, other engineers have put autonomous decisioning around it, et cetera. And it's in this translation between that simple number the AI spits out and everything else behind it that things are slightly going wrong and where there's a lack of transparency.
And there are many, many onion layers. And so people don't know. It's almost like a Russian doll, where you take one out, and another, and another, and finally you get to that chip.
But we buy the Russian doll because it looks cool - it's got colors, et cetera - whether it's a car or any other type of technology or object. But, in fact, it's not transparent what's inside.
When people buy an Apple Watch, they buy it for the watch and for the functionalities, not for the algorithms in there. When we go online and search for our next holiday, we look at the images, et cetera, but we don't know what data we leave behind and how algorithms are influencing everything we see afterwards. And so that's a shame. And that makes it very complex to explain to people that it's a hidden technology and, in fact, just an algorithm.
KIMBERLY NEVALA: And it's interesting because there's a push - and I think a good push - within the responsible AI or trustworthy AI community for things like model cards or system cards. Back to your earlier point, though, it's not necessarily clear to me at this point that those, A, meet the need that they purport to meet, and, B, are really at all useful in a germane way for an end user. So I'm interested in your thoughts on those.
MIEKE DE KETELAERE: I absolutely agree. I mean, if this were a school or university course, model cards would be a good way to just write down what you're actually trying to build. To make you think about the data set, about your guardrails, about the functionalities, about when this deviates from the original objective you're trying to reach.
What we see right now is, first of all, that the big model cards are almost created by companies dumping everything in there that can go wrong with the algorithm. So whenever they are pointed out as liable for a mistake, they can just say: well, sorry, it was in the model card. Well, this isn't very fair.
I mean, every time I'm in front of an audience and ask who has used ChatGPT, almost all hands go up in the air. And that's everyone from people working at home to people in big corporates. When I then ask a little bit later who has read the model card - and the companies are perfectly fine, ticking the box, the model cards are there - nobody has read them. Nobody has been told there is information in there. And the information in there is in such an unreadable format for people not in the AI business that even if they did read it, they wouldn't get anything out of it.
It's the same as the leaflets for our medication. Those are written in an understandable language. Do we need to understand all the chemistry that goes into our medication? No. But we need to be able to see when things are going wrong, to know what to do if things go wrong, et cetera. I think we are far away from this level with AI.
And then there's an additional point. A lot of engineers today - and I'm counting myself - are very inspired by what the technology can do. Recently, I was able to upload 10 pictures, and with a tiny little prompt, I could make almost realistic pictures of myself sitting in all types of situations that I wanted - all with free tools. And there you go.
I understand that engineers are not bad people. They just want to show off: look what we can do, look at us. But what they don't realize, or what we didn't realize enough in the past, is that on the other side of the world there are people with access to these types of tools, algorithms, or even data sets who have different moral standards. And so this whole principle of open source and open sharing - the Hugging Face platforms are full of it - although I understand very well that it's very helpful in certain IT domains, I sometimes wonder, for certain AI solutions, if it is really a good approach.
And this all comes together: tools that are very hidden and explained very little in their system cards, and then tools without system cards at all - because in Europe, at least, you're not forced to create a system card for every system you put out there. And those are floating around. So those who understand them can use them against us without us knowing.
So putting things back behind APIs, behind closed doors, would be very helpful. Because system cards, like you mentioned - take ChatGPT and OpenAI. Yes, they create system cards. But don't forget that GPT-2 was an open version. They saw at that time that it was creating deepfakes.
What did they do? For version 3, GPT-3, they actually went behind closed doors. There was an API. You had to explain why you needed access to the tool. And then 3.5 was not just a little open, it was wide open - because of investors, the race, et cetera.
So you see these two points. It's not good enough just to have model cards. It's also about the whole principle of open source for a certain set of technologies.
KIMBERLY NEVALA: But it doesn't sound, based on what you just said - there's a commercial imperative or incentive, I think, for making something open or closed to some extent. But there's not necessarily a clean line to be drawn either: it's not as if bringing things behind closed doors means no harm can ensue, or even necessarily minimizes harm, right?
MIEKE DE KETELAERE: No, no, no, no. It's not as easy as that. No, no, no. I agree.
But just leaving everything completely open is probably not the best way. No, I absolutely agree.
But the point is that if something is free, well, first of all, you're not the customer. In our world, you're just a data source. But it also makes it possible to create a certain - and I'm going to use an extreme word - addiction to it.
It's the same with alcohol: when you get it for free, people take it faster. Here, if you give people these tools for free, they will start using them. I wonder what would happen if you made everything paid, or if companies using AI had to pay a tax. All these things - we aren't there yet.
Anything we've seen around regulation in other sectors takes years. It's a multidisciplinary debate. I don't think we should wait with AI to start going deeper on these things. And there I see a lot of people talking the same language, using the same words. There are good initiatives. But none of them have an impact.
The European AI Act is activated now, but there are already so many exclusions in it that it becomes a very complex puzzle. And so people are starting to look more at how to go around it than at how to apply it correctly. And I don't think you need a European AI Act to take your own responsibility and liability when you use AI.
KIMBERLY NEVALA: Yeah, and it is an interesting point about the ready availability, even for the general consumer base. And there's this interesting catch-22, particularly in the current world of generative AI and large language models: the thing that's really interesting and exciting about them is how easy it is to interact with them in our natural voice, if you will.
And that is also what makes them so very dangerous for consumers who do not - regardless of how much we say this - truly understand what the systems do and how they work, in such a way that we could have confidence that whatever "relationships" they're having with them, they're having in a mindful way. I was about to say particularly for younger cohorts, but I think it may actually be the opposite these days.
So when it comes to the regulatory sphere, back to that point: does the issue of looking at these things as services versus products complicate matters? Is it exacerbating how we go about approaching this?
MIEKE DE KETELAERE: Yeah, there's a lot of complications there because, indeed, I know from Europe it's a very complex question. Is it a product or is it a service? The problem is - and I think it starts with AI itself - there's not one AI. There's so many sectors. There's so many applications. There's so many flavors of AI that you can say in one case it's a product. In other cases, it's a service. But how do you put this into one regulation? So that's already a challenge by itself.
But I think - and that's what we observed in Europe - a lot of general regulations could already be applied. If you're not allowed to do something as a human, why would an AI be allowed to do it? So, in fact, we probably spent a lot of time on this European AI Act when a lot of things could already be regulated with what's out there: consumer protection law, et cetera.
But I would like to bounce back to something you mentioned before about the relationship part. Because it is indeed a bit of a toxic mix, maybe an unwanted one. The moment generative AI was made so broadly available was just after COVID, when everybody had started working on computers. It's also just after the moment when people, especially the younger generation, got a little bit addicted to social media. So on one side you have a society with a bit of a loneliness epidemic, where people are trying to find themselves, et cetera. And at the same time, you bring out a solution which is so humanized.
That's a toxic mix. And then you can have Sam Altman saying people shouldn't form a relationship with it, that it's just a tool. That's easy to say, and I understand what he means, and he's right. But I can tell you that I myself, when I was writing my book at night, started to talk to my Google Assistant because there was nobody around. I just wanted to get some emotions out, and I started to talk to him. And because it was talking back to me, I said: well, how cool is this? So it's easy to say we shouldn't form relationships with these things, but that's not how the outside world works, you know?
KIMBERLY NEVALA: Yeah. And correct me if I'm wrong here, but I believe you're part of an initiative, the Safe AI Companion Collective, to address some of these types of issues. Can you talk to us briefly about that?
MIEKE DE KETELAERE: Yeah, absolutely, absolutely. So what we've seen - and that's based on that toxic mix - is that you don't need to wait another 10 to 15 years to understand that things might go wrong. They've already gone wrong.
Unfortunately, there have been some dramas with these companion chatbots. Companion chatbots are either specific platforms where you can create your friend or companion, or they can even just be the simple ChatGPT. What we've seen is that people initially start asking questions. These things will never be against you. They will never ask you demanding questions or put you in your place. They're always pleasing; they're always in a pleasing mode. And some people who are going through a difficult period in their lives seem to quite like that. They seem, in fact, to get addicted to it.
And even with the simple ChatGPT we see that, for example - at least here in Europe - some workers no longer go downstairs to have lunch with their colleagues. They now have ChatGPT to ask questions about their personal lives. We see consumers pouring out their whole personal lives to the chatbot on an e-commerce platform. So what you see in society is that people like, I'd say, the character of these companion chatbots - which is always, always fun. But that's unfortunately not the real world.
And so we are going from an attention economy - how many clicks did I get, how many likes did I get on social media? You got 32 likes; I got 33 likes. OK, next time I want 34. That wasn't fulfilling us enough. So we are now going to what we call an intimacy economy, where it's not about the clicks and the likes. It's about having personalized conversations.
The only thing is, like I mentioned before, that these personalized conversations, if you look at the brain, only stimulate dopamine effects - the wanting, like the beginning of any type of relationship. But that is not how our brain is wired. After a while, it needs to move on to other types of communication and dialogue styles. And this will never happen. AI will never do this.
So AI - this is my personal claim, I need to say - AI will never, ever be able to replicate a real human conversation or relationship. And why? Because there are many more factors at play.
I already talked about the neurotransmitters. But it's also the fact that when you see somebody, you have emotions on your face. The odors that come along. The fact that you can go back to a memory from eight years ago, et cetera. On all these things, we are not there with AI, and we're probably never going to be.
But it is, in that sense, for me, quite a danger to put these platforms out there - especially in the context of younger children - without any guidance for the parents or the educators, or even for medical staff, to explain how these technologies work. They haven't been able to test them. They haven't been able to test the impact on the brain.
Neuropsychiatrists are unaware of the long-term effects of these platforms. We are not there. It just went too fast. And what we see, unfortunately, is that for those who got into trouble, there's no help out there. That's why we created the Safe AI Companion Collective. And we work on two fronts.
First of all, we want to make governments aware that today these platforms run up against privacy policies and consumer protection law, so they need to be looked at again. That is the government side. And then there's another side that is really aimed at society. We raise awareness through initiatives that reach schools, parents, educators, et cetera, to show them that platforms like Snapchat have these companion chatbots switched on automatically. So what does that mean for your child? What can it mean if you see your child going online more and more, et cetera - so that they are able to talk to their child about it?
So we're not against them - it's about safe companion chatbots, not about banning them. But what we've seen through our analysis is that it's like any other addiction. If I put a glass of wine out there, I am not an alcohol addict. I can take that glass. Maybe some nights I take a second glass, but I have the self-discipline to put it aside. The same with ChatGPT: look, I use it every day, but I know exactly when to stop, which questions to ask, how not to get addicted to it. But unfortunately, as with any other addiction, if you're in a special moment of your life, a more difficult moment, things might get tough. And you might no longer see that line.
KIMBERLY NEVALA: Yeah, it's interesting. Probably almost every conversation we have comes around to this issue of a multidisciplinary approach. And a lot of times, it harkens back to what we started talking about: what happens when the engineers are allowed to roam alone and just run with ideas, to prove points and build things without a view of the context? And so we talk about making sure we have people in the room who can speak to the business need, who can reflect the consumer perspectives, who deal with ethics and psychology and these elements.
But what you're talking about here - tell me if I'm right - is neurobiology. You're talking about psychiatry. Not to use the structure of the brain and how it works to help us figure out how to design these systems, as we've done with neural networks, but to figure out what their impact is in the world.
MIEKE DE KETELAERE: Yeah, so that's really specific to these chatbots that were initially created to build relationships. But what we've seen - and that's something I didn't expect to happen - is that even a simple platform like ChatGPT can create that reaction. People meet somebody on the street and then they go to ChatGPT: how can I address this person? I think I like her. And then ChatGPT gives all types of ideas.
Well, wait a minute. Actually, over the weekend I was with a friend - he's into technology, but he's a very balanced friend. Any question I asked him, he said: let me just ask ChatGPT. And after four or five times, I basically said: look, please just put your phone away. Where is my friend who was able to answer and to think and to debate, to accept feedback, to have a human type of conversation in which we might disagree but, at the end, both of us come out stronger?
With GPT, I don't have those types of conversations. I get an apology when I tell him that he's wrong: I'm sorry, I apologize, I got it wrong. Sorry, I got it wrong. There's a slider in these platforms between wanting to be helpful and not being harmful - these are just mathematical variables that have been put somewhere in the algorithm. The outside world doesn't know that. Somewhere, someone can simply adjust those sliders.
But it's just two variables: harmfulness and helpfulness. We put one a bit higher here, a little lower there. But this is not how we work. We're not asking here: Kimberly, how was your helpfulness setting in this conversation today? No. You have empathy. You try to read somebody else's state of mind. You try to collaborate in order to sort out whatever you have to sort out. This is not how these AI dialogues work. And people forget about this.
Our next generation, with all the tools they have on their digital devices, already has more difficulty having a human conversation. But I tell you, try to play Monopoly with a 16-year-old. That used to be a very fun activity at home. Now it's horrible, because they can't even see that you have a strategy, that you're actually trying to combine this with that. That's what's so cool about all these skills. ChatGPT will never do that.
KIMBERLY NEVALA: Yeah, this is interesting. Now, I want to take a few minutes and talk about sustainability. Part of this is whether the way we're deploying AI systems - in the context of companions, for instance - is a sustainable societal good and approach. But we also really want to look at the environmental impacts.
And I think what we have certainly learned is that if something doesn't start out with an eye to sustainability - to being developed from the get-go in a sustainable way, and maintained in a sustainable way - it's very hard to then pull back and shift the focus, shift how we do things.
And that seems to be playing out a bit today in AI, especially at the level of the hyperscale vendors, where we've realized these systems have huge impacts - huge energy needs, huge water needs. And while there are certainly techniques coming around to try to reduce energy consumption, it is optimization of energy consumption as opposed to minimization.
So when you are thinking and talking about sustainability, is that a fair encapsulation of what we are currently focused on? And is that the right way to think about sustainability in AI?
MIEKE DE KETELAERE: Yeah, well, there are a couple of points. If we only start thinking about sustainability now, after we've started working on AI - the train has left the station.
KIMBERLY NEVALA: Yes.
MIEKE DE KETELAERE: AI is out there, and it's not sustainable. The fact that as soon as you use AI, in whatever format, it has a negative impact on the planet - because it will use either battery or grid electricity - is something unknown to most people. In fact, it started even before AI, with digitalization. Sending a thank-you email might seem very lovely to the person you just had a meeting with, but it's bad for the planet. It's data transfer.
But AI goes one step further. It's not just a huge data transfer that's happening; the algorithms themselves also consume a lot of energy. And it's hidden: we just talk into our phone, but what's happening in the phone and how it works has never been explained. So one point is transparency about where the energy consumption sits. And I'm only talking about usage here - I'm not even talking yet about the materials used for our mobile phones and for the big data centers.
Next to this transparency, there's also the fact that nobody is challenging big tech, the hyperscalers. Because, indeed, you mentioned it already: there are techniques available with which you can easily reduce your AI energy consumption by 40%, 50%. And this works at different levels. It starts with the hardware you select - certain hardware consumes much less, for example the new brain-inspired chips that don't analyze every pixel of an image, just the spikes. Very techy, but very nice.
But we can also act at the level of the algorithms. A lot of projects today - not the large language models now, but local projects - start with the idea: I need to train my algorithm from scratch. Well, the fact is that a lot of pretrained algorithms are already out there. Take a pretrained algorithm. When we do something with our brain, we continue to build on what we already know. So why would we, with AI, always start from scratch? Simple fact.
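[Editor's note: a minimal sketch of the reuse Mieke describes - starting from a pretrained network instead of training from scratch - using PyTorch and torchvision. The specific model and the 10-class head are illustrative assumptions, not from the conversation.]

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a network already trained on ImageNet instead of training from scratch.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained layers so the compute (and energy) already
# spent learning them is reused, not repeated.
for param in model.parameters():
    param.requires_grad = False

# Replace only the final classification head for the new, local task.
model.fc = nn.Linear(model.fc.in_features, 10)  # e.g., 10 local classes

# Only the small new head is trained, cutting training compute dramatically.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```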
There's also the fact that we're oversizing our algorithms. Our brain cannot be oversized - it's protected by a big bone around it. So our brain does something fantastic: it just lets some neurons die out, neurons that no longer need energy. If you ask me a question about my fifth-grade history, I wouldn't know the answer. But I can redirect that energy to the new parts I need now.
Well, for the oversized neural networks we build, there are easy techniques like pruning that you can use to take away all the parts of this huge network that don't add any uplift to the accuracy of your end result - a tiny, tiny bit maybe, but you wouldn't even see it in the result.
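[Editor's note: the pruning Mieke mentions can be sketched with PyTorch's built-in pruning utilities. The layer size and the 30% pruning amount here are illustrative assumptions.]

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# A deliberately oversized toy layer.
layer = nn.Linear(512, 512)

# Zero out the 30% of weights with the smallest magnitude: these contribute
# almost nothing to the output, so accuracy barely moves.
prune.l1_unstructured(layer, name="weight", amount=0.3)

# Make the pruning permanent; with sparse-aware kernels, the zeroed
# weights mean fewer multiplications at inference time.
prune.remove(layer, "weight")

sparsity = (layer.weight == 0).float().mean().item()
print(f"Share of weights pruned: {sparsity:.0%}")
```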
If you take them out, that already greatly reduces the number of calculations you make. Again, very techy, but the technology is there - just make sure you have engineers around you who know it. It's also about knowing where you put your data and when you train your model. People still believe it's better to train their models overnight. That comes from the days when energy was cheaper overnight. Today, energy is greener during the day, because of solar, et cetera.
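[Editor's note: a hedged sketch of that timing idea - running training when the grid is greener. `get_grid_carbon_intensity` is a hypothetical stand-in for a real regional grid-data feed, and the threshold is an arbitrary illustration.]

```python
import time

def get_grid_carbon_intensity() -> float:
    """Hypothetical stand-in for a real grid-carbon feed (gCO2/kWh).

    In practice this would query a regional data source; it is
    hardcoded here only to keep the sketch self-contained.
    """
    return 180.0

THRESHOLD = 200.0  # assumed gCO2/kWh cutoff; tune per region

def wait_for_green_window(poll_seconds: int = 900) -> None:
    # Block until the grid is clean enough, e.g., the midday solar peak.
    while get_grid_carbon_intensity() > THRESHOLD:
        time.sleep(poll_seconds)

def train_model() -> None:
    print("Training while the grid is greener...")

wait_for_green_window()
train_model()
```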
I'm not going to explain all of these factors - there are more than 40, 50 levers you can actually pull to make your AI systems more energy-efficient. People ask me: how come we don't know about that? Well, of course, because it's not the profit model of the big players, so they're not going to push this. And the press and the media don't find energy-efficient methods exciting enough to talk about. It's very techy. So that's one point.
And then I think a second point is that we keep working on neural networks and new directions along that same line. While, actually, you can go outside into nature and see a lot of intelligence in plants that don't have any neurons. So you could ask yourself: are there other forms of intelligence we might try?
And indeed, in the past we had these kinds of forms of intelligence - genetic algorithms, et cetera - but they sort of faded out. They weren't good enough. And so somehow this neural-net AI - because there are different flavors of AI - made it, because that's the trend we followed the longest. But I tell you, there's much more intelligence out there that isn't based on neurons.
I sometimes look into the air and see, for example, how birds fly. One comes in front of another, and then another one mixes in again. And I go: wow, 100 birds all flying together, no collisions. Meanwhile, I try to get four drones in the air in my lab, and they're already colliding.
All these things make me ask: why don't we look a little bit more at the intelligence we don't have in our own brains? Because, yes, we can talk - so why are we trying to replicate human dialogue? Yes, we can make pictures; yes, we can draw. So why are we putting so much energy into tasks we can already do? Why can't we switch to areas where human intelligence isn't available - looking beyond the colors we can see, listening beyond the sounds we can hear, to show us what's out there?
I mean, dogs can smell illness. So why don't we focus on that? And so this is my point: we're in the age of replicating the human brain - I'm coming almost back to where we started - and we already know it's not going to be possible. So why don't we drop this idea and move to looking at the intelligence in nature that might help us? First of all, a lot more with global warming than what we currently do. But it would also give us a wider range of intelligence to solve the problems we need to solve. But that's, of course, my personal opinion.
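[Editor's note: the collision-free flocking Mieke describes is classically modeled with Craig Reynolds' "boids" rules - separation, alignment, cohesion - a form of decentralized intelligence with no neurons and no central controller. A minimal sketch, with all parameters illustrative:]

```python
import numpy as np

N, STEPS = 100, 200                      # flock size, simulation steps
RADIUS = 5.0                             # how far each bird "sees"
W_SEP, W_ALI, W_COH = 0.05, 0.05, 0.01   # rule weights (illustrative)
MAX_SPEED = 2.0

rng = np.random.default_rng(0)
pos = rng.uniform(0, 50, (N, 2))         # positions
vel = rng.uniform(-1, 1, (N, 2))         # velocities

for _ in range(STEPS):
    diff = pos[None, :, :] - pos[:, None, :]   # diff[i, j] = pos[j] - pos[i]
    dist = np.linalg.norm(diff, axis=2)
    near = (dist < RADIUS) & (dist > 0)

    for i in range(N):
        nbrs = near[i]
        if not nbrs.any():
            continue
        coh = diff[i][nbrs].mean(axis=0)            # steer toward neighbors' center
        ali = vel[nbrs].mean(axis=0) - vel[i]       # match neighbors' heading
        close = nbrs & (dist[i] < RADIUS / 2)
        sep = -diff[i][close].sum(axis=0) if close.any() else 0.0  # avoid collisions
        vel[i] += W_SEP * sep + W_ALI * ali + W_COH * coh

    # Cap speed and move; each bird reacts only to local neighbors.
    speed = np.maximum(np.linalg.norm(vel, axis=1, keepdims=True), 1e-9)
    vel = np.where(speed > MAX_SPEED, vel * (MAX_SPEED / speed), vel)
    pos += vel

nn_dist = np.sort(np.linalg.norm(pos[None] - pos[:, None], axis=2), axis=1)[:, 1]
print("Mean nearest-neighbor distance:", nn_dist.mean())
```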
KIMBERLY NEVALA: Well, there's a fascinating discourse in there on the value of diversity. Where we have right now narrowed down to one unique goal, I think, which is replicating humanity. Which, to your point, raises a really fair question.
MIEKE DE KETELAERE: It's Homo academicus at the top of his pyramid saying: I'm the intelligent one. That's what we do. We try to replicate the human brain because we are intelligent. We can move. We can fly. We can talk. We can think. We can reason, et cetera. But how arrogant is this?
We look down on animals. We look down on plants. We look down on everything else. But, sorry - look at what we've done to the planet. I don't even need to point to a lot of the things that happened in recent weeks. Look at what we do in certain parts of the world towards humanity. How arrogant can you be, then, to want to replicate that human brain?
KIMBERLY NEVALA: Yeah, and a much more practical point is that - even if you said this was, to some extent, the goal - there's a lot of value in other components and perspectives. For organizations as well, I would say the value of diversity is not just in perspective: there's value in diversifying their approaches, their platforms, the techniques they bring on board. Because it also potentially opens up a wider range of problems they can solve, and lets them solve those in a very effective and sustainable way.
MIEKE DE KETELAERE: No, absolutely, absolutely. And there's so much we don't know. At a certain point, it really got stuck in my head that we are going in the wrong direction - I told you we're bringing many more uncertainties into life than we're taking out. So I really started to read all these books on intelligence in nature.
And then you learn things like this: in certain labs, you're not allowed in when you're wearing Chanel No. 5 - my favorite perfume - because it puts all the plants into defense mode. I can tell you, I've never touched Chanel No. 5 again. All these little things that we simply don't know until we read about them.
And the funny thing was, these researchers on natural intelligence all said: sorry, there won't be a second book, because we can't get money. There is no research funding around this. I thought: how strange is this? Tons of research money for creating chatbots to create artificial relationships. But for the people trying to understand natural intelligence, there's no money - even though they understand much more of what's needed. I find this a weird world. Very weird.
KIMBERLY NEVALA: Yeah, it's fascinating. And I love this idea of just really expanding our minds about what it is that we are trying to achieve and where we might find inspiration for that. Not as a panacea, but as another way forward.
So, as you're talking to organizations, to individuals, in all of the varied work that you do, what would you leave with this audience as the key thought or takeaway to, as you say, ensure that what we're doing today makes us good ancestors tomorrow?
MIEKE DE KETELAERE: That's a fantastic question to end with.
Look, I think we have to believe, and continue to believe, that AI is a fantastic technology. That's been proven in many use cases. So I don't want to end on a negative note, because it is a fantastic technology. But I think we just need to dare, all together, to turn the cake around.
In the cake today, the bottom layer is profit. Then, about eight years ago, we started looking at the impact on people. And only lately do we have a tiny little topping of planet. So why don't we turn the cake around? First we make sure our planet is not harmed - we're aware of all the things you can do to decrease your energy consumption. Then we make a very nice framework to protect society. And then, please, go for your profit. It's what I do as well: I use ChatGPT to decrease the number of hours it takes to create or translate text. But only in that order.
And I think if we get this - I call it my 10 commandments for trustworthy and ethical AI - we can have a very balanced environment where AI and humans go hand in hand. And this tension on the line will decrease, where today I just see it increasing.
So that would be a fantastic way to end it: let's turn the cake around together, just like we did with the car. Now we're all using the car; it's a fantastic technology. Let's do the same for AI.
KIMBERLY NEVALA: That's great. We are going to end on that, what I see as a truly positive challenge. And for everyone listening in, we will provide links in the show notes to some of those resources, including the 10 commandments for AI adoption, which provides some very tactical and tangible tips related to some of the more expansive ideas you've brought up today. Thank you again for your time. This has been fascinating.
MIEKE DE KETELAERE: Thank you for having me. Thanks. It was great.
[MUSIC PLAYING]
KIMBERLY NEVALA: Alright. If you'd like to continue learning from thinkers and doers and activists, such as Mieke, you can subscribe to Pondering AI now. We're available on all your favorite podcatchers and also on YouTube.