LGBTQ+ Excellence Podcast

In this episode we speak to Andrea Paviglianiti - a data scientist and AI engineer from Bratislava. The discussion explores Andrea's journey into AI, ethical AI practices, and why AI must account for diversity and be free of gender and cultural biases. Andrea emphasizes the importance of making AI accessible, trustworthy, and safe while sharing his insights on the current challenges and future directions for inclusivity and ethical standards in AI development.

About Andrea
Andrea Paviglianiti comes from a little town in Sicily and spent the last 10 years living abroad, in Northern and Central Europe. Today he speaks to us from Bratislava, in the Slovak Republic.

Andrea works as a Data Scientist and Artificial Intelligence Engineer at one of the leading IT corporations. His job consists of making Machine Learning and Artificial Intelligence ready to use for his clients, delivering useful and operational AI for business.

But on the other hand, Andrea is a man who likes to learn new things. Speaking roughly six languages, versed in non-verbal communication, and a martial arts practitioner, he devotes more time to humanistic fields and causes than he lets on.

One of his latest personal projects revolves around Ethics applied to Artificial Intelligence. In his own words, "AI has the potential to do good and bad things – it all depends on how we decide to use it and if we are thinking about what we are doing." And so we hear he is developing some online classes on the matter for solopreneurs, small business owners, and content creators who want to use AI for their business but can't afford advocacy from the Big Tech Companies.

Connect with Andrea
00:00 Introduction to the LGBTQ+ Excellence Podcast
00:38 Meet Andrea: A Journey from Sicily to AI Expert
02:16 Defining LGBTQ+ Excellence
03:16 Andrea's Path into AI
04:23 The Role and Challenges of AI
04:56 Ethical Concerns in AI
07:00 AI and Gender Bias
10:22 The Importance of Diverse Data
12:33 AI in the Workplace and Society
15:29 Future of AI and Inclusivity
20:22 Becoming an AI Consultant
23:20 Andrea's Aha Moment
24:18 Conclusion and Key Takeaways

This show is presented by The Queer Consulting Network (https://www.queerconsulting.org/) and produced by Rainbow Creative, with Matthew Jones as Executive Producer and Nathan Wheatley as Editor.

Queer Consulting Network envisions a world where LGBTQ+ individuals are empowered to create thriving communities, build essential skills, and gain confidence through networking events and connections with relevant companies and people. We strive towards a future where everyone in our community feels safe, secure, accepted, and valued for who they are. Learn more at: https://www.queerconsulting.org/ 


Creators & Guests

Host
Ahmet Aydogan
Founder and Executive Director of Queer Consulting Network (Non-profit Organization)

What is LGBTQ+ Excellence Podcast?

Welcome to the LGBTQ+ Excellence Podcast, where we dive into the hidden tales of triumph, resilience, and leadership within the consulting world, all through the vibrant spectrum of the LGBTQ+ community. Hosted by Ahmet Aydogan, in this show, we delve beyond the conference room, uncovering the remarkable journeys and victories of LGBTQ+ professionals. From candid conversations with executive leaders to insightful discussions with industry pioneers, we explore the secrets of success and the transformative power of diversity and inclusion.

Expect surprising revelations, genuine insights, and the courage born from shattering barriers. This isn't just a podcast; it's a celebration of voices that ignite change, challenge conventions, and lead with authenticity.

Get ready to be inspired, empowered, and propelled forward in your consulting career. Follow the LGBTQ+ Excellence Podcast today and join us in amplifying these powerful stories of excellence.

This show is by The Queer Consulting Network (https://www.queerconsulting.org/) with Podcast Production provided by Rainbow Creative (https://www.rainbowcreative.co/)

Ahmet:

Welcome to the LGBTQ+ Excellence Podcast, the show where we unravel the untold stories of success, resilience, and leadership in the world of consulting through the vibrant lens of the LGBTQ+ community. I'm your host, Ahmet Aydogan. In this show, we embark on a journey that goes beyond the boardroom, exploring the unique experiences and triumphs of LGBTQ+ professionals. In this episode, we will discuss ethical AI: pioneering diversity and inclusion in the digital age.

Ahmet:

I'm joined by Andrea. Andrea comes from a little town in Sicily and spent the last 10 years living abroad, in Northern and Central Europe. Today, he speaks to us from Bratislava in the Slovak Republic. Andrea works as a data scientist and artificial intelligence engineer at one of the leading IT corporations. His job consists of making machine learning and artificial intelligence ready to use for his clients, delivering useful and operational AI for business.

Ahmet:

But on the other hand, Andrea is a man who likes to learn new things. Speaking roughly six languages, versed in nonverbal communication, and a martial arts practitioner, he devotes more time to humanistic fields and causes than he lets on. One of his latest personal projects revolves around ethics applied to artificial intelligence. In his own words, AI has the potential to do good and bad things.

Ahmet:

It all depends on how we decide to use it and if we are thinking about what we are doing. And so I hear he's developing some online classes on the matter for solopreneurs, small business owners, and content creators who want to use AI for their businesses but cannot afford advocacy from the big tech companies. Let's fire up our conversation. I would like to ask my first question. What does LGBTQ+ excellence mean to you?

Andrea:

To me, that means diversity, belonging, and acceptance in a way we could never have hoped for a few decades ago, in a world that wanted us to see things just in black and white. And now we have people free to be who they are, fully embracing their existence. Well, relatively; our society still has much room for improvement. Right?

Andrea:

But I strongly believe that gender orientation or perceived gender is not, and should not under any circumstances be, a limitation to career advancement, personal development, or pursuing a dream. And in fact, I'm happy to be here today because I feel I can finally connect with this audience and be of service somehow, like sharing some insights.

Ahmet:

You have very strong convictions and beliefs about what excellence means within our community. Please tell us about your journey. When did you start in the AI space?

Andrea:

First disclaimer: my academic background had nothing to do with computer science. In fact, I educated myself in parallel while working. It goes back to when all the buzz about data science started. I was intrigued, and I was also looking at a different career path, right?

Andrea:

So then a series of circumstances, actually some luck, allowed me to go from learning about data science and artificial intelligence to actually working on it. Now, the learning curve, I have to admit, was very steep, and the journey often tiresome. But I can also say it's been rewarding. Today I do what I like, so nothing is better than that. I mean, I end up doing things that are always new and always different.

Andrea:

And now I've been actively working in the field for three years or so, and, well, wow. Much of what you learn today was already outdated yesterday. Like, whatever I learned may not be true anymore. When it comes to progress, it's such a fast-paced field. My work these days consists of prototyping ideas for our clients into actual applications.

Andrea:

So most of them have businesses, but no knowledge of things like tools or application frameworks. Some have just seen, let's say, that ChatGPT works and want the same thing for themselves, but there's much more to it than that, like processing images and videos, detecting anomalous behaviors, and so on. I could continue forever, but you get the idea. The thing is, I always wonder whether the way artificial intelligence is progressing might actually get out of hand, because it progresses so fast, you know, and how it will impact people and communities, not just businesses.

Andrea:

AI is costly, and it isn't perfect. That means not everyone has the means to access it, right? And if it screws up, who's going to fix things? These days, those are the problems that interest me the most. How do I make it more accessible, but also safe for those who make it and those who use it?

Andrea:

So there enters another field of knowledge: laws, ethics. One never stops learning in this field.

Ahmet:

You're a man after my own heart, always looking forward to the next steps. So please explain, what is artificial intelligence to a beginner? How would you explain it?

Andrea:

Yeah. So if we were to put it in a few practical words, I would say that artificial intelligence processes information to do something on our behalf. It's as simple as that. But beware, and this is very important: it processes information, but it does not think. Really, we should call it automated intelligence rather than artificial, right?

Andrea:

The idea of AI that we have from sci-fi literature and movies is what we today call general intelligence: machines which can think for themselves and make decisions for themselves and others. But we are really missing a couple of ingredients to get that recipe straight. The artificial intelligence we work with today strictly depends on human knowledge and actions, our actions. This AI can perform calculations, categorize things, detect objects, and notice patterns invisible to human eyes.

Andrea:

But again, it cannot think and make decisions based on consciousness, let's say. We feed human-collected data to AI. Now, such data carries loads of imperfections, because as humans we are naturally prone to prejudice and bias, which is the biggest social problem right now, and not just there. So many times the data we gather is gathered wrongly. The most common biases actually relate to gender and ethnicity.

Andrea:

And when it comes to generative AI, it depends a lot on the texts and messages we feed to it. So now imagine: if the text comes only from a culture that, let's say, abhors gender diversity, what is the result? Right? The AI will likely be strictly binary, or homophobic, or misogynistic, and so on. But it's not the AI's fault. This is why there are many tasks AI should not perform, like judging criminals or identifying people.

Andrea:

Somebody has tried it before, raising concerns and controversies, and with big failures.

Ahmet:

In essence, AI processes information to perform tasks for humans.

Andrea:

I'll give you an example. Last year I was reading this book, Invisible Women, which suggests that it is more difficult to collect consistent data from women, so surveys are generally gender-biased. You have more observations made from men than from women, and therefore generalizations are gender-biased. The direct consequence is that when we average facts, those averages always lean towards the preferences of male individuals.
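The skew Andrea describes can be sketched with made-up numbers (nothing here comes from the book; the figures and the office-temperature scenario are purely illustrative):

```python
from statistics import mean

# Hypothetical survey of preferred office temperature (degrees Celsius).
# Men are overrepresented 4-to-1, so the pooled average drifts
# toward the male preference.
men = [21.0, 21.5, 21.0, 22.0]  # four male respondents
women = [24.0]                  # one female respondent

# Naive pooled average: dominated by the larger group.
pooled = mean(men + women)

# Weighting each group equally surfaces the hidden preference.
balanced = mean([mean(men), mean(women)])

print(round(pooled, 2))    # 21.9
print(round(balanced, 2))  # 22.69
```

The point is not which average is "correct" but that the pooled number silently encodes the sample's gender imbalance, exactly the kind of generalization the book warns about.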

Andrea:

So we don't think about it a lot, but it goes as far as affecting things like human rights, urban planning, corporate benefits, the design of things, even the size of mobile phones, and so on. And the problem I'm focusing on right now is, in fact, the root cause of AI overlooking diversity. If we, the humans, are biased, then we cannot expect AI to do better in that regard. We know that these topics are even taboo in some cultures; they aren't spoken about, or they are even outlawed.

Andrea:

This is not a novelty, especially if we go beyond male and female, well into the LGBTQ+ cause. That's because, until recently, gender and sex were treated as a binary variable. Right? If you look at many tables where you have a gender column, there's just male/female, or 1 for male, 0 for female. For a machine, this is a matter of classification.

Andrea:

A machine doesn't care who you are; it just does its job. That also means we don't have enough data for all the gender identifications. But we cannot just sweep gender under the rug altogether either. Removing gender from the analysis doesn't solve the problem.

Andrea:

It is still there, just not spoken about, again. So it's an issue that society must tackle very carefully, and I believe we should establish at least a sort of standard to allow gender-focused data collection that is good enough: not perfect, maybe, but good enough to embrace diversity.

Ahmet:

Human-collected data often carries biases, especially regarding gender and ethnicity, affecting AI's output, as you elaborated. We should establish standards for gender-focused data collection to embrace diversity. My next question: you touched a little on gender bias and diversity. If you feed AI information from only one side, it will be binary.

Ahmet:

So how crucial is data quality in shaping AI's understanding of marginalized communities, especially LGBTQ+ individuals? How do companies ensure data integrity and diversity?

Andrea:

Extremely crucial, my friend. It comes down to understanding social and cultural dynamics and coming up with some model or whatnot that ensures people are treated and evaluated fairly. And there, you see, you don't just need business analysts and data scientists. You need people working in the social sciences and, let's say, humanistic fields. So I want to get one thing straight here.

Andrea:

Right? I think it transpires a bit from what I said before: machines aren't ethically accountable for a decision. AI predicts based on data, but it does not understand the meaning of the data or its context. You can be fooled by more human-friendly AI like ChatGPT, but in truth, artificial intelligence has no awareness whatsoever of what it's telling us. That means we must know our flaws so that we can engineer AI in a way that takes those flaws into account.

Andrea:

If it were that easy, we wouldn't be talking about it today. Right? But there are two exemplary situations I want to mention where AI also plays a role: one, the relationship between a company and its employees, and two, the services a company provides to its clients.

Andrea:

And in both cases, you are gonna have this issue internally and externally. But let's talk about the company-employee relationship. There is a major issue where most organizations politicize DEI, or diversity, equity, and inclusion, policies. You know, doing some bare minimum to be compliant, but not really fostering an inclusive culture that also accounts for, let's say, healing and belonging. Healing means we learn from our mistakes.

Andrea:

We amend them by doing better, not by canceling our mistakes. We shouldn't do that. We should embrace mistakes so that we do not forget where we come from. And belonging, because everyone, everyone wants to belong. It is deeper than being included in a group.

Andrea:

I mean, you could still be included, let's say, at a party and still feel like an outlier there; you won't feel you belong. So a trend now, when it comes to work and being hired, is to automate HR, and in doing so most human resources roles are disappearing. Job interviews are safe so far because they need human interaction. But do we really want this human interaction to be replaced? That is the question. I think it becomes more of a joint effort between companies as business entities and single individuals to co-create something else, which is something a friend of mine, also my coach, Clifford, works on in the United States.

Andrea:

But then we don't want to overlook the service-based relationship either, right? I think no one may understand the situation better than LGBTQ+ data scientists and AI engineers who have had to work on biased data. I can imagine them facing how gender is considered, or even swept, you know, under the carpet. They must think, this is not right. Even worse is that those people are held accountable for the success or failure of an AI-infused venture.

Andrea:

They also have rules to respect, after all. Who is responsible, then, for AI's mistakes? Somebody needs to take the blame somehow. I think in this circumstance we must have centralized organs of control, non-governmental and agnostic, that review the work done by the key players in AI development. There are some organizations doing that, but they are just starting.

Ahmet:

Yeah. Companies definitely need to address these biases through transparent, fair, and ethical AI development. It is a big task, and we definitely have a way to go. Looking forward, how are companies planning to develop AI technologies that embrace and celebrate the diversity of LGBTQ+ identities?

Andrea:

That's a very challenging question. You see, some efforts have been made, but we still must be up to the challenge of devising technology that actually focuses on that very problem. We cannot just trust integrity self-assessments that say, me, company X, I'm ethical and compliant. That's very nice to say, but it's not that credible to the public. In fact, I keep reading articles daily about racist AI, even though we know about the problem.

Andrea:

Same for non-inclusive AI. In the past seven years we have seen many funny cases, which are now intensifying due to AI finally being able to, let's say, speak human language, like ChatGPT and other generative AI. It means AI can be straightforwardly offensive, not just overlook some minorities in ways that affect them negatively and indirectly. Here we aim for a cultural change. I think, for example, the Pride movement is one of those that is changing the world for the better by raising awareness and spreading, essentially, a message of love.

Andrea:

That's funny, because to my eyes the Pride movement is what incarnates that principle of doing good while not doing harm, which is also one of the ethical AI principles voted on at the Asilomar conference in 2017: the principle of beneficence and non-maleficence. On the other hand, if you just look at the past months, most big corporates laid off their AI ethicists, whose job is actually to focus on this problem and solve it. Obviously, maybe philosophers' thinking is seen as a cost rather than a good investment. I don't know. But, like, as we speak of integrity, right?

Andrea:

I wonder how they plan to progress on it and what's their alternative.

Ahmet:

Yeah. Advocates for trustworthy and fair AI aim for a cultural shift, promoting inclusivity. How do ethical guidelines within companies ensure LGBTQ+ representation is safeguarded in AI applications?

Andrea:

Maybe this will sound a bit simplistic, considering what I've said, but I think what we need is a diverse team, diverse in culture, sex, and gender orientation and identification. That's how I think we will come up with a better solution: getting some dedicated ethical principles, creating a framework for data collection and processing. And this is obviously easier when the data we're talking about are tables. For generative AI it changes a bit: identifying offensive human behavior and creating countermeasures, for example.

Andrea:

So, defining what is harmful and hateful, and providing examples and, you know, by all means a context. Always, always a context.

Ahmet:

Yeah. Diverse teams in terms of culture, gender, and orientation are essential. Companies should develop dedicated ethical principles and frameworks for data collection and processing. How can individuals and communities contribute to shaping more LGBTQ+-inclusive AI development?

Andrea:

So the best we can do at the moment, I think, is experimenting. Experimentation. Trying to bring on innovation. As I already said, this is a shared responsibility, I believe. All of us need to contribute to that. We cannot wait for the big key players to do all the work.

Andrea:

We already have the means to do better. Here I want to invite all members of the LGBTQ+ community with a keen interest in the problem, or in the topic of AI, to consider applying their principles of living to technology and to drive innovation that is fair and unbiased and respects all diversity. But then I also invite everyone to always think critically about the world around us. To observe ourselves first, to discover our fallacies and how bias drives our behavior in everyday life. And to use our little gray cells to overcome that.

Andrea:

It seems probable that human lives will be even more intertwined with machines from now on. Some even aim to achieve the sentient machines we have been talking about. The time to tackle inclusivity in technology, in my opinion, is now.

Ahmet:

Exactly. As you said, the integration of human lives with technology is increasing, and the pursuit of sentient machines continues. Therefore, the time to address inclusivity in AI is now. What practical steps or initiatives can one take to become a consultant in the AI space?

Andrea:

I suggest looking first into artificial intelligence ethics, to get a concrete grasp of what the world is currently doing to make AI accessible, safe, and trustworthy, which is actually a combined effort of academics, technology experts, and philosophers from different parts of the world. There you should probably start by reading the Asilomar AI Principles, one of which I mentioned before. Then check what the big key players are doing and how they are driving both the technological advancements and the ethical conversation, because that is also ongoing. But also check the laws on the matter in the country you live in.

Andrea:

Because every country is applying it a bit differently. Pay attention especially to the European Union AI Act, as it ought to become a global standard. Like the GDPR, other countries might need to comply with it when working with the European Union; if you are one of those who plan to do that, then that's probably something you must do. This is for the basics, and then I foresee that one must choose among three paths. The first is keeping on with artificial intelligence ethics.

Andrea:

That is what I'm currently doing. This is work that is going to interest industries, both business-to-consumer and business-to-business services, and government entities. So you should always stay in the loop there, even if you don't actively work on it. There is a lot of research and development there. And it's also about raising awareness.

Andrea:

Then we have AI governance, which focuses on transformation using platforms that control AI and data. So here what we do is build frameworks and solutions around existing and future AI models. How do you help people? By making AI robust, safe, reliable, fair. In other words, trustworthy.

Andrea:

And then there is a third option, my favorite actually: joining the ranks of the controlling entities. This is for consultants who want to make a difference by enforcing standards and keeping bigger companies accountable. There's a strong focus on inspections, investigations, and audits there. It may require more tech-related skills, but it isn't impossible for anyone.

Ahmet:

Yeah, absolutely. So: AI ethics, AI governance, and controlling entities, focusing respectively on raising awareness and industry engagement, developing frameworks for robust, safe, reliable, and fair AI, and enforcing standards and holding companies accountable through inspections and audits, as you mentioned. Can you please share an aha moment with us from your journey?

Andrea:

A big realization came when I realized that if AI fails us, people are going to look for a culprit. But you can't indict and imprison an AI. It seems obvious when I say it, right? But because now we have things like ChatGPT and Copilot that keep up a conversation with us, we kind of personify them.

Andrea:

Some personify them a lot. And it is now easier than ever to personify these forms of AI. You might even have heard of people who fell in love with chatbots, right? Well, I realized that this was going to happen, and that we may be misled into focusing on symptoms and pressing the wrong buttons.

Andrea:

So what I did was start reading more and writing more. And the more I did that, the more I became aware of where we are right now and of how much work there still is to do.

Ahmet:

We eagerly anticipate learning more from you in the future, for sure. So in summary, we talked about your self-education and transition, your professional experience, and your journey into the AI space. Your work involves turning client ideas into practical applications, often educating clients about the broader capabilities of AI beyond popular tools like ChatGPT. You shared your concerns about the ethical implications, accessibility issues, and potential misuse of AI, which led to your interest in the intersection of AI with ethics and law. You emphasized that AI inherits human biases, particularly around gender and culture, and advocated for more inclusive data practices and diverse development teams to address these issues.

Ahmet:

If you could summarize the essence of our conversation in one word, what word would you choose to resonate with our audience today?

Andrea:

I guess that's gonna be awareness.

Ahmet:

Wow. Yeah. Couldn't agree more. Thank you very much for joining us, Andrea. In this episode, Andrea shared valuable insights on ethical AI and on pioneering diversity and inclusion in the digital age.