Born & Kepler is named after the scientists Max Born and Johannes Kepler. This bilingual podcast, offered in both German and English, dives into the expansive world of Artificial Intelligence (AI), exploring its foundations, evolving technology trends, academic research, and its impact on businesses and society.
Born & Kepler features a diverse lineup of experts: academics, venture capitalists, private equity investors, journalists, entrepreneurs, CTOs, and policymakers. Each guest offers unique insights into how AI is reshaping their sector and what we might expect in the future.
Our goal is to provide a deep understanding of the core principles and breakthroughs in AI, enabling you to stay up to date with the latest advancements in AI technologies and how they are transforming industries. In our episodes, we explore how AI is influencing business strategies, optimizing operations, and driving innovation. We also examine the ethical, social, and regulatory aspects of AI in everyday life.
Andreas Deptolla (00:01.294)
Josep, welcome to the show.
Josep (00:03.487)
Thank you. Happy to be here.
Andreas Deptolla (00:05.238)
Are you at your home in Barcelona right now, or in New York, or where in the world are you? Very nice, perfect. So to start off, I would love to hear about your academic journey. You spent a lot of time in the United States, at MIT and other places. What shaped you? What were the pivotal moments for you?
Josep (00:08.745)
Yes, I am. I'm at home, correct.
Josep (00:30.517)
Okay, well, thanks for the question. I started at MIT in 1978, so that tells you how old I am. I started my PhD in operations research at MIT. Operations research is, as you know, applied math: building mathematical models for engineering problems. And then...
Andreas Deptolla (00:45.122)
Mm-hmm.
Josep (01:00.231)
When I graduated with my PhD, I got interested in health-related issues, basically modeling physiological systems. I applied for a postdoc at MIT, and MIT did not have money for postdocs at the time, but they had money for people doing another PhD. So I ended up with two PhDs from MIT, which is quite rare. But that was the circumstance: there was no money available for doing what I wanted to do, so they provided me a scholarship to keep studying. And that was fun. So I learned how to model physiological systems, respirators, these kinds of medical devices. And that was fun. But in the end I never used any of it, because I ended up at the management school. I first taught applied math at the IESE business school, and then I helped found the Information Systems Department within the school. And then, with all the digitalization after 2000, we created the innovation, well, I shouldn't call it innovation, but the department that deals with digital information and how to apply digitalization in businesses.
That's basically my trajectory.
Andreas Deptolla (02:59.938)
Yeah, it's interesting that you mentioned the funding and how it put you on a totally different track, right? And led you to get another PhD. So, IESE obviously has a great brand here in Europe, right? And you have firsthand experience both at a European business school and in the United States. How would you describe the differences between Europe and...
Josep (03:09.524)
Correct.
Andreas Deptolla (03:27.108)
the United States from an academic, from a business school perspective.
Josep (03:31.041)
From the business school perspective, which I know a little bit more, I shouldn't be so stringent on this, but currently the difference, I think, is that most of the American schools are still very much research-centered, in the sense of research that very few people read, let me tell you that. They are self-fulfilling academics who generate clusters. I'm a little bit cynical on this, but let me tell you: I think the so-called top journals are managed by people that self-reproduce, let me put it that way. Now, it is true that we need some basic research. We do need some basic research on management. But oftentimes, the things that get published in these journals are incomprehensible to managers. So in the dilemma between rigor and relevance, I think American schools have leaned very much toward rigor and have lost the relevance. Whereas in some of the European schools, relevance is a lot more prominent, let's put it that way: applications, bringing it down to earth, helping managers. And that's why I believe there are a number of schools in Europe that are at the top of the rankings for executive education,
Josep (05:57.051)
more than for undergraduate or MBA education. And that's because of the clients. Let me tell you my own story: I've been in charge of executive education at IESE for a number of years, and we have a number of clients that simply come to buy.
Andreas Deptolla (06:12.483)
Mm-hmm.
Josep (06:26.421)
It's not that we go out and sell. Companies come to us with requests, because what we provide is application-based solutions for companies seeking education that's applicable tomorrow morning, not 10, 15, 20 years from now.
In that sense, I think that's a big difference between the US and Europe.
Andreas Deptolla (07:02.372)
It's surprising to me to hear that. I would have assumed that Europe, maybe Germany specifically, is typically more research-focused compared to the United States. But you're saying that specifically about the management track, not in general.
Josep (07:21.733)
I'm talking only about management applicability, not about anything else. Management applicability, I think, is crucial. That's my only point, and I know that a number of my colleagues will not agree with me, but that's my view.
Andreas Deptolla (07:25.529)
Mm-hmm.
Andreas Deptolla (07:46.724)
Well, that makes it interesting. So if you're saying, hey, there's not all that much applicable research about management these days, are there certain books that you recommend, or papers, studies, interesting research that you think are relevant for managers?
Josep (08:12.243)
Well, there are a couple of things, say, on the topics people are very much interested in these days. It all depends on how rigorous you want to get in the AI arena. If you want something very novelistic, you know, sort of storytelling, I would go with Harari's book, Nexus, which I think is interesting. If you want to get a real understanding of what it is, there is a book by...
Andreas Deptolla (09:01.666)
The Nexus book, yeah.
Josep (09:09.833)
Let me see, I had it here somewhere. It's from the MIT Press: Artificial General Intelligence, by a guy called Togelius, Julian Togelius, a book from 2024. And I think it's very interesting if you want a technical understanding of what the algorithms actually do and why. If you want the popular-science kind of thing, futuristic and a little bit of doom, why AI is going to kill us all, then I guess Harari's Nexus is an interesting book as well. Or if you want to really understand where this is going, there is a guy by the name of Kurzweil who wrote a book called The Singularity Is Near; his second book is called The Singularity Is Nearer. And basically singularity means that you will have general-purpose AI. General-purpose technologies are technologies
Andreas Deptolla (10:18.244)
Mm-hmm.
Josep (10:38.983)
that, like electricity, you can use for many different things. The steam engine is a general-purpose technology: it can be used to generate traction in locomotives and to generate power in power stations. If AI becomes singular, in the sense of general purpose, then that's going to be a complete breakthrough. AI currently is very specific. Take the world champion of Go: you know there was this algorithm from Google DeepMind that won at Go. That same algorithm will lose at checkers. So this algorithm...
Andreas Deptolla (11:17.954)
Mm-hmm.
Andreas Deptolla (11:26.348)
The game, yeah.
Andreas Deptolla (11:35.556)
Hmm.
Josep (11:36.393)
cannot even play checkers. That's how specific these algorithms are. There is no point in asking an algorithm to work on something it has not been designed to solve. Even the large language models, such as GPT and whatever, can only do what they can do; they will not do something else. So in that sense, singularity is the key. If we get singularity, with AI as a general-purpose technology, then perhaps we are going to be in trouble. I don't know, who knows.
Andreas Deptolla (12:23.768)
Yeah, you're alluding to the current applications of AI and some of the limitations, right? From your perspective, what are realistic applications of AI, and where are our expectations maybe overstated at this point?
Josep (12:48.041)
Realistic applications are, again, specific applications like image recognition. Image recognition is quite good for very specific things: you can train algorithms to read x-rays, for instance, and pick up broken bones from x-rays.
Andreas Deptolla (13:15.832)
Mm-hmm. Mm-hmm.
Josep (13:17.279)
And it will not miss one; it will do fantastic diagnostics recognizing x-rays. Now, that same algorithm, if you ask it to recognize faces, it will not be able to distinguish a black person from a white person. It will not be able to, because you will have trained it to do something very specific. Image recognition in medicine is extremely important because it takes a lot of the simpler cases out of the way, like studying pathology slides, for instance, where people make the little cuts to look for tumor cells.
Andreas Deptolla (14:13.941)
Mm-hmm.
Josep (14:14.773)
It's very simple: this thing recognizes them and doesn't miss one. It works very well. But obviously it can only do that; it will not do something else. So in that sense, image recognition, I think, is great. It's fantastic. Now, when you bring in sociological biases, things that are susceptible to bias, it's different. For instance, there's a very famous case on record where they tried to predict whether somebody would commit a crime again, just by looking at their faces and
Andreas Deptolla (15:06.816)
Mm-hmm.
Josep (15:13.365)
their backgrounds. Even though race was not a factor in the classification, many other things were, like where the person lives, what kind of community he or she is involved in, what kind of friends or associates he or she has, and
Andreas Deptolla (15:32.824)
Mm-hmm.
Josep (15:42.953)
that alone will classify people. And there are cases in which these algorithms have been proven to make lots of mistakes. It often happens that the volume of first-time offenders overwhelms the judicial system. So they have an algorithm that screens people, and only complicated cases that cannot be decided get sent to a judge; all the others just get processed automatically. And what happened is that this thing made lots of mistakes. I would not trust an algorithm to decide whether I will be a second-time offender or not. But that's what people do. Systems do that because many judicial systems are so overwhelmed that they need help. Anyway.
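To make the proxy effect Josep describes concrete, here is a minimal, purely hypothetical sketch: the protected attribute is never given to the model, yet a correlated feature such as neighborhood, combined with biased historical labels, makes the risk scores split along the hidden attribute anyway. All numbers and variable names are invented for illustration.

```python
# Synthetic illustration: race/group is excluded from the features,
# but a correlated proxy (neighborhood) reconstructs the disparity.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hidden protected attribute: never shown to the "model".
group = rng.integers(0, 2, n)
# Proxy feature the model does see, 80% correlated with the group.
neighborhood = np.where(rng.random(n) < 0.8, group, 1 - group)
# Biased historical labels: offenses in neighborhood 1 were recorded
# more often, so the training data itself encodes the disparity.
recorded = rng.random(n) < np.where(neighborhood == 1, 0.45, 0.25)

# Simplest possible risk model: recorded-reoffense rate per neighborhood.
risk = {k: recorded[neighborhood == k].mean() for k in (0, 1)}
score = np.array([risk[k] for k in neighborhood])

# Although the protected attribute was excluded, scores split along it.
for g in (0, 1):
    print(f"group {g}: mean risk score = {score[group == g].mean():.3f}")
```

The point of the sketch is the one Josep makes: removing the sensitive column is not enough when other columns, or the labels themselves, carry its signal.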
Andreas Deptolla (16:54.724)
And I think there's an entire ethical dilemma in what you're describing. Before we go there, maybe on a more fundamental level for our audience: you mentioned that AI is really good at very specific use cases, whether that's playing chess or Go, or reading x-ray images and determining whether a bone is broken or not.
Is each of these cases then essentially a model you program for that unique purpose? How is this done on a technical level?
Josep (17:34.557)
No, usually this is done with a technique called backpropagation. Let's take a very simple case, in which you want to decide whether an email is spam or not spam. What happens is that the algorithm reads the email and makes a decision: spam or not spam. At the beginning, with no training, it will just make lots of mistakes, and then a human looks at it and says, no, this is not spam, and then every single weight in the neural network gets adjusted. You know how neural networks are organized: each node connects to, whatever, five or ten others. In a brain, you have a neuron with dendrites, and that's how it gets its input, and then an action comes out. And that action
Andreas Deptolla (18:52.836)
Mm-hmm.
Josep (19:00.635)
excites another neuron. That's how this thing works. So when you train an algorithm and you say this is not spam, but the algorithm said it was spam, then every weight that made the algorithm say it was spam gets reduced a little bit, and every weight that said it was not spam gets reinforced. You train the algorithm by showing it many, many, many emails and letting it make mistakes. You reinforce the weights that guessed well, and you de-reinforce, or whatever it is called, you reduce the weights of the ones that made it make the mistake. That's how you train the algorithm. And when the algorithm is trained, it's quite good, in fact. But then you have another dilemma, which is: what do you want? Because it will make some mistakes. These algorithms will always make mistakes; there's no question about that.
Andreas Deptolla (20:05.987)
Mm-hmm.
Josep (20:28.009)
And then the question you have to decide is: what kind of mistakes do you want to make? Do you want more mistakes in the sense of calling an email that is spam not spam, or do you want to go the other way, calling something spam that is not spam? Then you get an email from your boss, it's flagged as spam, it goes into the spam folder, and you never see it. Or do you want to have to deal with some spam in your main mailbox? Those are the type one and type two errors you have to balance. Depending on how you train the algorithm, it will make more or fewer mistakes on one side or the other. So that's how you train these algorithms: by showing them many, many, many things and telling them, you're right, you're wrong, you're right, you're wrong. The algorithm readjusts its weights toward the proper solution. That's how it's done.
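A minimal sketch of the loop Josep describes, on synthetic data: a single-layer network (logistic regression) whose weights are nudged against whatever pushed it toward the wrong answer, followed by a decision threshold that trades off the two error types. The features, sizes, and learning rate are all illustrative, not from any real spam filter.

```python
# Toy spam filter trained the way described above: show it many labeled
# emails, and on each pass shift every weight a little bit toward the
# correct answer (gradient descent on a single-layer network).
import numpy as np

rng = np.random.default_rng(1)
n_feats = 20                                   # e.g., 20 keyword indicators
true_w = rng.normal(size=n_feats)              # hidden "ground truth"

X = rng.integers(0, 2, size=(5000, n_feats)).astype(float)
y = (X @ true_w + rng.normal(scale=0.5, size=5000) > 0).astype(float)

w = np.zeros(n_feats)
lr = 0.5
for epoch in range(200):
    p = 1 / (1 + np.exp(-(X @ w)))             # predicted P(spam)
    # Weights that argued for the wrong label get reduced; weights that
    # argued for the right label get reinforced.
    w += lr * X.T @ (y - p) / len(y)

# The type one / type two trade-off: moving the threshold lowers one
# error rate and raises the other.
p = 1 / (1 + np.exp(-(X @ w)))
for thresh in (0.3, 0.5, 0.7):
    flagged = p > thresh
    false_pos = np.mean(flagged[y == 0])       # real mail sent to spam
    false_neg = np.mean(~flagged[y == 1])      # spam reaching the inbox
    print(f"threshold {thresh}: {false_pos:.1%} false positives, "
          f"{false_neg:.1%} false negatives")
```

Raising the threshold makes the filter more cautious about flagging (fewer lost boss emails, more spam in the inbox), and lowering it does the opposite, exactly the balance Josep points to.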
Andreas Deptolla (21:38.37)
So then, in the example of the x-rays, the more data I feed it, the better it gets; I show it positive and negative cases. What's interesting is that the examples you gave that work really well seem to be binary, bone is broken, yes or no, whereas the example of the criminal is essentially a prediction of the future, which is a lot more complex.
You also mentioned some of the ethical dilemmas, right? Whether it's hiring, or predicting who will commit a crime, or, as you and I talked about, self-driving cars: there's an accident, who's now liable? How do you think about these ethical dilemmas?
What can we do as a society to find a framework here?
Josep (22:45.225)
Well, this is a very complex question. But the first thing, in my understanding, okay, this is me thinking, me talking, is that when an algorithm makes a decision, it has to be explainable. So you have to understand
Andreas Deptolla (23:10.82)
Mm-hmm.
Josep (23:13.829)
why the algorithm made that decision. In extremely complex neural networks, like ChatGPT, which has a trillion weights, a trillion connections, it's impossible to know how it arrived at a decision. So take medicine, for instance, medical applications, image recognition. If you cannot explain why an algorithm said that this x-ray shows a broken bone, or that this pathology slide is a cancer cell or is not a cancer cell, no one is going to take it seriously. You have to show why the algorithm decided that this is a tumor cell or not. If you cannot do that, the chances that this thing will go forward or get used are very minor. So the first thing is explainability. And then there are other characteristics: it has to be reasonably accurate, in the sense that if you show it 10,000 slides from biopsies, it has to be by and large right. You know it's going to make some mistakes, but it has to be, say, 99.8% correct; otherwise the thing is reasonably useless. But that doesn't solve the issue of accountability. So who's accountable?
Josep (25:19.957)
Who's accountable? The person that wrote the system? The person that trained the system? Currently, accountability resides with the person that accepts the result. That's how accountability works today. So if you have a self-driving car
Andreas Deptolla (25:35.822)
Mm-hmm. Mm-hmm.
Josep (25:45.967)
and you are paying zero attention, you are reading the paper, and you hit someone, it's your problem. That's because you were reading the paper instead of paying attention to the road. That's currently how it works. It may change, perhaps, if algorithms become better, but currently the responsibility is on the driver. Which is not the case if your brakes don't work; if your brakes don't work, the responsibility is on the car manufacturer. But currently, the way this works, it's on the person at fault.
Andreas Deptolla (26:30.336)
Right. Manufacturer.
Andreas Deptolla (26:40.036)
Mm-hmm.
Josep (26:44.745)
If you fall asleep at the wheel because, you know, the thing was driving by itself, and you hit someone, that's your problem.
Andreas Deptolla (26:53.08)
You know, with the self-driving car example, it would be quite interesting to see how the insurance companies lobby this over the next five or ten years, right? Because at the end of the day, they will have all the data, and they will say, hey, listen, if it's a self-driving car, the probability of an accident is a lot lower, right?
Josep (27:14.517)
It's a lot lower, a lot lower.
Andreas Deptolla (27:15.946)
I don't have somebody driving drunk or tired, all these things you can take out. So I think that would be an interesting predictor of...
Josep (27:28.469)
But the issue here, again, if you want to talk about self-driving cars: clearly, self-driving cars are a lot safer than human drivers, because it has been proven over millions and millions of driven miles. The issue is who's going to die in an accident and who's not going to die.
Andreas Deptolla (27:43.501)
Mm-hmm.
Josep (27:57.941)
The example I use in class for this is: clearly you're going to save a lot of drunk drivers, because drunk drivers will just jump into the car and the car will drive itself, no problem. But if an old lady with impaired vision steps in front of the car and gets killed, who are you killing, quote unquote, and who are you saving? If you are saving the drunk driver but killing the vision-impaired lady who stepped in front of the car because she didn't see it coming, I mean, is that fair? What is the fairness here? You are saving the drunk driver but killing the old lady with impaired vision and impaired hearing because she stepped in front of the car.
Andreas Deptolla (28:32.59)
Mm-hmm.
Josep (28:57.489)
It's peculiar, you know. This does not necessarily have an easy solution, ethically, I don't think. But, you know.
Andreas Deptolla (29:10.34)
Hopefully, if self-driving cars really raise the overall safety standard, we'll not let these super edge cases hinder the overall progress. To your point, we've got to figure out some kind of framework to make these kinds of decisions. But hopefully that doesn't cripple our progress.
Josep (29:36.029)
No, absolutely. But I like your point about the insurance. The insurers are the ones that are going to solve this thing, because somebody has to decide what it is they want to insure and not insure, and they have billions of data points that will sort it out. I'm pretty certain of that. Yeah.
It never occurred to me, but I think the insurers are going to be the ones that fix it.
Andreas Deptolla (30:10.532)
So now, if we look at some of the things we discussed, self-driving cars, the x-rays that enable medical doctors to make better decisions, these are all positive cases. Now I'd love to hear your opinion on some of the potential threats. I read somewhere that about 80% of the quote-unquote people on social media are now AI bots. I don't know if that number is correct or not, but there's huge potential for producing or amplifying fake news to influence political decisions, elections, and whatnot.
So maybe to narrow it down a little to the fake news side: what are you seeing, and what can we do to protect our society?
Josep (31:24.617)
That's a very complex question. But let me try to reason a little bit about the roots of the issue here.
Andreas Deptolla (31:35.268)
Mm-hmm.
Josep (31:39.495)
These algorithms that select what you read are basically designed to increase traffic. That generates what are technically called information bubbles, in which you only see what you like to see. Because if you see what you like to see, you stay longer on the platform, and therefore there are more clicks and more opportunities to show you advertising. So you only see what the system believes you want to see. And the systems are quite good. So you only see
Andreas Deptolla (32:13.944)
Continue watching it.
Josep (32:37.695)
the things that you want to see. And the longer you stay, the better for the platform. That's what it is. So these information bubbles do exist; that's the central root of the issue here. And that generates, especially for kids, by the way, terrible situations, because it creates stereotypes. I see it with my grandchildren: the girls only see, you know, beauty contests and things like that, the stereotypes of how great this or that makeup is, and then
Andreas Deptolla (33:22.052)
Beautiful woman,
Josep (33:33.897)
the boys only see sports. It's interesting that it creates those stereotypes, because the algorithms are designed to keep people longer on the platform. So that's the first part. Then, if you take it into the political arena, it generates even more troublesome situations. Because if you are into, say, the far left, and not just the far right, everyone talks about the far right, but the same happens on the far left, there is no difference there, people like to read things that are more extreme, and
Andreas Deptolla (34:14.262)
Mm-hmm.
Josep (34:31.092)
that's why more extreme news becomes more appealing to people in that area of the spectrum. So if you like to read extremely conservative things, and someone creates a fake news item that is consistent with what you would expect, then you stay longer on it. It doesn't have to be related to the truth at all, because any news item that is consistent with what you would expect it to be, you like, and therefore you stay longer reading it. And the longer you stay, the more susceptible you are to advertising and whatever. So fake news does not appear out of the blue. Fake news appears because it keeps you longer in the area where you would like to be. So it's not a construct of saying people are evil; it's just a wheel that reinforces itself. If you create fake news at one extreme or the other, people will stay longer on the platform, and so people will stay. Now the question is, what can platforms do to make sure that fake news doesn't exist? But this is very complicated, because how do you know whether somebody said something or did not say something? With AI it's now very simple to create a video of me saying something that I've never said. So how do you know
Josep (37:00.627)
that this person did say it or did not say it? And if someone makes a mistake, quote unquote, a truthful mistake, by saying something that should not have been said, and then says, no, I didn't say that, how do you know which part is true and which part is fake? The denial that he said what he said, or the original statement?
So it's very complicated to know what's true and what is not true in this day and age.
To me, it's very hard to solve this conundrum. I don't think it will be solved anytime soon.
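To make the engagement loop Josep describes concrete, here is a small, purely hypothetical sketch of engagement-optimized feed ranking. The scoring function is invented for illustration: it rewards ideological agreement and extremity and never consults truth, which is exactly why confirmatory fakes can outrank balanced reporting.

```python
# Illustrative feed ranker: items are ordered by predicted engagement
# for one user. Truthfulness never enters the score, so a fake item
# that confirms the user's leaning can outrank accurate reporting.
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    stance: float    # -1.0 (far left) .. +1.0 (far right)
    is_fake: bool

def predicted_engagement(user_stance: float, item: Item) -> float:
    # Assumed model: engagement rises with ideological agreement,
    # and extreme content holds attention a little longer.
    agreement = 1.0 - abs(user_stance - item.stance) / 2.0
    extremity_bonus = 0.2 * abs(item.stance)
    return agreement + extremity_bonus

def rank_feed(user_stance: float, items: list) -> list:
    return sorted(items,
                  key=lambda it: predicted_engagement(user_stance, it),
                  reverse=True)

feed = [
    Item("Balanced policy analysis", 0.0, is_fake=False),
    Item("Outrage piece confirming your side", 0.9, is_fake=True),
    Item("Outrage piece attacking your side", -0.9, is_fake=True),
]
for it in rank_feed(user_stance=0.8, items=feed):
    print(f"{predicted_engagement(0.8, it):.2f}  {it.title}")
```

Run it and the confirmatory fake ranks first for this user; feeding engagement back into the user's profile would then narrow the feed further, the self-reinforcing wheel described above.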
Andreas Deptolla (37:52.076)
I think what's really interesting to me about this is...
A lot of people talk about the threats of AI as, you know, AI will take over and destroy humanity or create wars or whatnot. I think what you are describing right now is really the challenge at hand, and it's twofold. One, what you mentioned earlier: what impact will social media in combination with AI have on our kids, on the next generation, from constantly seeing these beautiful pictures? I think that is largely settled: if you look at books like The Anxious Generation, there's no doubt that there's a strong correlation between depression, anorexia, all of these things, and the use of social media. And then the second question is, is there a threat to Europe when it comes to the political spectrum? Because it's so easy now for Russia, for China, whatnot, to create 10,000, 20,000 bots, because there is no regulation. So your last point was, what can we do? Let me propose really two things to discuss. Number one, none of these platforms, and I'm talking about X, TikTok, whatnot, none of these are European, right? I think that's the first issue at hand: there's no control, and should the government have a role in that? And the second one is, everybody can create all these bots, there are no regulations, and of course Facebook and the likes have no interest in solving that, because that's how they make their money. But could the government say: in order to get an account, you have to authenticate yourself? You have to...
Andreas Deptolla (39:44.93)
you know, show a passport or driver's license or whatnot, right? And maybe that's a central function that validates these things.
Josep (39:53.717)
That's a very good point. But I have no idea how this could be implemented, or whether it could be faked, because the process of creating an account on Facebook, for instance, currently just requires that you have an email, that you have a phone number, and that you pass a validation. But probably this could be faked too. So I'm not so sure how this could be implemented in real life.
Andreas Deptolla (40:23.299)
Mm-hmm.
Josep (40:48.467)
I don't know, but maybe you'd have to have a picture of yourself with an ID or a passport or whatever.
Andreas Deptolla (40:54.542)
Mm-hmm.
The banks are doing it, right? If I want to create a new account, a trading account or whatnot, that's exactly what I have to go through: they take a picture of my driver's license, I've got to talk to somebody. Nothing is perfect, and I think it will always kind of escalate, but we've got to do something, right?
Josep (41:18.997)
That would clearly address the 10,000 bots that apparently the Russians created just to interfere with the Bulgarian elections or whatever it was.
Andreas Deptolla (41:26.724)
The election, yeah. And it could also, you outlined that the first point was social media, right? And brains are especially malleable during puberty, in those years. So could the European governments say, in order to use these kinds of social media, you have to be 16, or other regulations?
Josep (41:56.309)
Well, apparently they just passed a law in which you have to be 16, as you said, to be able to use Facebook or TikTok. And companies will have to implement age verification far beyond what they are doing right now, which is zero. So probably
Andreas Deptolla (42:03.588)
Mm-hmm.
Josep (42:25.845)
parental approval could also be a possibility, if you need parental approval to open an account when you're not an adult. I don't know, we'll see. But none of these things are foolproof; some of them can probably be faked as well. So I don't know.
But the stuff...
Andreas Deptolla (42:55.81)
And maybe this is not a problem for us to solve here. Maybe there are some entrepreneurs out there, right? I think this could be a very interesting thing to solve for.
Josep (43:06.011)
Clearly, it's a very interesting research area. Whoever finds a solution for this and cracks that nut will make a lot of money. I have no question about that. There's no doubt in my mind that whoever solves the issue of detecting fake news will make a lot of money. I did some research a long time ago on the ability to detect fake news, and our results were quite interesting. We showed fake news and true news to many, many people, and people self-classified whether they thought they were right-leaning or left-leaning or whatever. And what we found is that the further away you are from the fake news, the better your ability to detect it. So if you are, say, right-leaning, your ability to separate fake news from true news on the right diminishes dramatically. If you are left-leaning,
Andreas Deptolla (45:00.452)
Hmm.
Josep (45:02.897)
your ability to separate fake news from true news on the right increases dramatically, and vice versa. Distance increases your ability to detect fake news. Why? I don't know, but we can
Andreas Deptolla (45:23.198)
Why is that?
Josep (45:31.637)
sort of guess why this is. My guess, and it's only a guess because I'm not a psychologist, is that if you are very much into
Josep (45:49.237)
your area of comfort, your ability to differentiate fake from true is a lot less than if you are further away. So the self-proclaimed conservative people in our sample, Republicans, and I did this in the US when I was there, the self-proclaimed
Andreas Deptolla (46:15.758)
Mm-hmm.
Josep (46:18.333)
Republicans were much better at separating fake news from true news on the Democratic side, and vice versa: the self-proclaimed Democrats were much better at separating fake news from true news on the Republican side.
So perspective helps: the further away you are from the area of the fake news, the better you can separate true from fake. We found that quite interesting, and it makes sense, by the way, because when you are inside this swarm of
Andreas Deptolla (47:06.884)
This thing,
Josep (47:16.445)
of things that are consistent with what you think should be true, your ability to differentiate diminishes dramatically. Whereas if you have perspective, you look at it and say, that cannot be true.
Andreas Deptolla (47:33.208)
Yeah, I mean, in that case you want things to be true, right? So you already have a bias there.
Josep (47:38.277)
Yeah, you want something to be true. That's probably the thing: you want something to be true because it's consistent with what you believe should be true.
Andreas Deptolla (47:52.612)
Josep, as I'm reflecting on our conversation today, I'm thinking about who else I should invite to the podcast. Two things come to mind. One is Ursula von der Leyen, or somebody from the European Commission: what are they doing about regulating all this? I think that will be really critical. The second thing is, are there any startups or interesting companies that are
Josep (48:08.008)
Yeah, yeah.
Andreas Deptolla (48:22.18)
trying to solve the puzzle of fake news in a systematic, programmatic way? I mean, there has to be technology there. But I would love to hear from you: who are the people where you'd say, Andreas, you've got to have this person on the show, you've got to invite them?
Josep (48:42.739)
Yeah. You know, if you can get someone like the guy that just became a Nobel laureate, Geoffrey Hinton, you know, from Google Brain. If you can get that guy, he's the prophet of doom, by the way. He's the guy that says we are all in deep trouble because we don't know what we are doing. His perspective, I think, is very interesting, because he goes around saying that we are making lots of mistakes, that we don't know what we are doing, that we are
Andreas Deptolla (49:15.715)
Mm-hmm.
Josep (49:33.237)
basically making lots of mistakes by letting AI progress the way we are. I think his perspective is very, very interesting. He has an interview on 60 Minutes that you can look at, to see whether you want to get him on the show or not. You can find it on YouTube, the 60 Minutes interview.
Andreas Deptolla (49:50.03)
Hmm.
Josep (50:03.343)
It's from the 60 Minutes show, you know, 60 Minutes from CBS. He's a very, very interesting talker, and he has a lot more perspective than I do, because he has been working on this; he has been making a living out of creating those algorithms. So he really knows what he's talking about. And he just got the Nobel Prize
Andreas Deptolla (50:10.254)
Mm-hmm.
Josep (50:32.725)
in 2024. So if you can get him, I would certainly invite him.
Andreas Deptolla (50:41.422)
We'll put him at the top of the list. Josep, thank you so much for your time today and all your insights. It was really great to have you on the show.
Josep (50:51.965)
Okay, thank you very much. It has been a pleasure.