The Try Tank Podcast is about innovation and the church
Welcome to day four of the Faithful Futures Conference
>> Speaker A: I'm waiting for the word. All right.
>> Father Lorenzo Lebrija: Good morning, everyone, and good morning online. Welcome to day number four of the Faithful Futures Conference. I'm the Reverend Canon Dr. Lorenzo Lebrija. Just Lorenzo. And it is my pleasure this morning to introduce our final speaker. As you'll recall, in the setup of this conference, one of the things we did is we followed Osmer's practical theology format. And we began by asking the question, what is going on here?
>> Speaker A: Right.
>> Father Lorenzo Lebrija: And we began with Dr. Butler, who was telling us who we are trying to become in this. What is this new age sort of bringing? Then came the why is this happening question. And that's when we got to see a little bit behind the scenes from someone who has been a corporate person, to see how this all works. Then came the question of what should be going on here? And that's about our imagination. And how was Jane Day, as we call it, yesterday? Wasn't that something? One of the things I really think is cool about that, as she said from that exercise, is what it moves on the scale: it allows you to think that you can do something about this. Whereas you begin like, oh, there's nothing that we can really do about this issue, once you go through these exercises, you realize that you do have an opportunity to do something about what is going on. And that leads us to today. The conference has been this uplifting, happy place up until now about thriving with AI. But we also need to be rooted in our theology. We need to be rooted in the prophetic voice of God that makes sure that no one gets left behind. So that is why the setup of this conference finishes with our next speaker.
Miguel De La Torre is a leading voice in contemporary Christian ethics
Dr. Miguel De La Torre is one of the leading voices in contemporary Christian ethics. He brings a perspective rooted in liberation theology and lived experience. Born in Cuba and arriving in the United States as a refugee, he has devoted his life to speaking truth about power, justice, and the margins of society. He serves as a professor of social ethics and Latinx studies at the, uh. Is it Lif. Iliff. Sorry. Thank you. The Iliff School of Theology in Denver, where his teaching and scholarship challenge both the church and the academy to confront issues of race, immigration, gender, and poverty. He has written or edited, well, as of later this year, it'll be 50 books. He has three books this year alone, I was telling him, and I can barely write an email. It's just not fair, Lord. Some of those books are award-winning works on ethics, politics, and theology, and his commentaries are sought after in both academic and public debates. We actually did a quick AI search this morning, and he is on the top 10 list of the most cited authors in ethics in academic journals in the world. Beyond his scholarship, Dr. De La Torre is a public intellectual, an activist who works with communities of faith and with leaders to wrestle with the ethical demands of our moment. He is especially committed to centering the voices of those most often excluded from theological conversations. So at a time when artificial intelligence is reshaping our society and our church, his insights into liberative ethics could not be more urgent than they are now. He reminds us that the questions we ask of technology are never neutral questions. They are moral questions about who benefits, who is harmed, and whose humanity is affirmed or denied.
The need for ethics to provide guidance in an AI age has become paramount
Please join me in welcoming Dr. Miguel De La Torre
>> Speaker A: Buenos días. Usually when I come to this type of conference to give a talk, I come, I give my talk, I leave, and that's it. But this conference was different. It was one where I really learned a lot. I came knowing less; I leave knowing more. And I honestly can say that seldom happens. So I want to thank you all. I want to thank the organizers for inviting me. And I do have one little caveat. When I finish at 11:30, I am going to literally run out of the room and jump in an Uber to catch a plane. It's not you, you know, it's me. Anyone who knows me can testify that I am among the most technically inept scholars within the academy. Hence my surprise, if not shock and bewilderment, when asked to keynote this conference. At first I declined the invitation. AI is not my area of expertise. Nonetheless, the planners were insistent: I was being invited not because I understood AI, but because I understand ethics, specifically liberation as ethics. Maybe the concerns publicly held toward the advances in AI exist because, historically, programmers and creators were more concerned with the next step of development than with the philosophical and ethical ramifications concerning the relationship AI would have with humans. Maybe these conversations should have occurred early on, back in 1943, when researchers created the first modern computer. The need for ethics to provide guidance in an AI age has become paramount. But I would argue not just any type of ethics, and specifically not an ethics rooted in some Eurocentric worldview which seldom examined its complicity with white supremacy and neoliberalism. I insist on an ethics grounded in the worldview of the marginalized. A praxis-oriented ethics in solidarity with the dispossessed and the disenfranchised. In other words, a liberative ethics.
Microsoft AI CEO Mustafa Suleyman reminds us that, quote, AI is both valuable and dangerous precisely because it's an extension of our best and worst selves. No doubt AI will unleash unthinkable cures for some of the most stubborn diseases that have plagued humanity. But will these life-saving advances only benefit those capable of paying for said cures? AI can be harnessed for good, helping to provide a blueprint for global threats like climate degradation or nuclear proliferation. But would such cures threaten certain corporate cash flows, like the fossil fuel industry, and thus be shelved? Because a primary incongruency exists between a profit-centered and a humanity-centered AI, we face a future where an AI not guided by ethical considerations can cause great harm and usher in an era of instability. Consider the warning of Nobel laureate Geoffrey Hinton, who is referred to as the godfather of AI for his role in its creation. Since 2023, he has been sounding the alarm that the technology he helped create has a 10 to 20% chance of extinguishing humanity, because tech companies are failing to ensure humans remain dominant over a submissive AI. Now, when I think of AI breaking bad, images portrayed on the silver screen come to mind. AI rebelling against humanity is a common Hollywood trope. Think of HAL 9000 in the 1968 classic 2001: A Space Odyssey, or the military AI Skynet of the Terminator franchise, which wars against humanity. Think of the RoboCop movies, where corporate AI directives override morality and ethics; I, Robot, which determines that for humanity's own sake humans should be controlled; or the most recent movie M3GAN, a child companion doll whose protective instincts lead to murderous acts. I'm also reminded of the HBO series Westworld, where theme park robots gain self-awareness and rebel against their human oppressors. All these movies and shows share a common theme: AI attaining superintelligence.
But for AI to develop independent thought and self-awareness indicates a possession of at least six foundational dimensions: aspiration, intuition, emotion, thought, sensation, and behavior. It is believed that sometime after artificial general intelligence is developed, superintelligence will follow. This superintelligence can be achieved by AI consistently designing an improved version of itself, or via a transhuman approach where the biological brain is either emulated or uploaded, merging human and machine intelligence. Can AI achieve human-like cognizant capacities? Can machines become conscious? Can they possess moral agency? We are told we are just years away from achieving this. Computer programs thus far can only produce pattern-recognition output based on input in the form of a sequence of instructions. Lacking intentionality, they cannot supersede the information that was inputted. Algorithms only understand the objectives and instructions programmed, lacking the ability to doubt or to be constrained by morality. Thus, because I remain skeptical that superintelligence can be achieved, I am not one of these AI doomers. Although contemplating the ethics of superintelligent AI may prove entertaining, even if I am wrong and it is eventually created, such deliberations detract from my immediate concern: AI as it exists in the here and now, and the present ethical responses arising from the margins when analyzing the personal, the national, and the international ramifications of AI.
The development of artificial intelligence is raising theological questions about Christianity
Before turning my attention to the ethical implications of AI and its impact on the individual, the national, and the international community, we should consider the theological questions AI is raising. Specifically, what does it mean to be human in an era where we can become gods? And more importantly, is it wise to become gods? The omniscient possibility of AI is raising existential questions among AI developers. Churches in Silicon Valley are reporting swelling congregations of tech workers who, prompted by the questions raised by the development and use of AI, are expressing curiosity about Christianity. Consider Peter Thiel, co-founder of PayPal and Palantir, who is conducting this month a sold-out series of lectures on the biblical Antichrist. The series is being organized by Acknowledging Christ in Technology and Society, an organization founded by Michelle Stephens, the wife of Trae Stephens, who is a co-founder of Anduril Industries, which makes and sells autonomous weapons systems for the military. Now, my concern is not that a turn to Jesus is occurring among these tech bros, but: whose Jesus? Is this the Jesus who stands in solidarity with the disenfranchised, preaching the liberation called for by the Gospel, or the emerging Jesus of white Christian nationalism? I doubt that my Jesus is Peter Thiel's Jesus. Thiel, after all, donated millions to the Trump and Vance campaigns and co-authored The Diversity Myth, where he complains of the shift at Stanford from Western values to multiculturalism and the dumbing down by the admissions office to attract diversity. Troubling is his assertion that the definition of rape has been expanded to include, quote unquote, a seduction that is later regretted. On the individual basis: personalized AI through recommendation algorithms creates a paradox which radically empowers individual agency while simultaneously reinforcing institutional oppression.
When we consider that most AI developers and data science teams are comprised of white men from Western countries, ages 20 to 40, we should not be surprised that AI as we know it is created in the image of Eurocentric white men. Thus AI cannot be value-neutral, because it mirrors the habits and the values of its developers. While AI's Eurocentrism does not represent the entire population, it nonetheless is used to determine decisions affecting the entire population. The unexamined biases arise throughout all stages of AI development: programming, testing, and application. Bias in, bias out, as data is harvested to shape choices. Autonomy is compromised by predictive modeling that reflects existing social biases manifested as racism and sexism. An example of AI racism is the COMPAS algorithm, which is employed by judges, probation officers, and parole officers to assess criminal defendants' likelihood of recidivism. Research conducted by ProPublica on 10,000 defendants in Florida's Broward County found a disproportionate prediction that Black defendants were twice as likely to be flagged as high-risk reoffenders when actually they were not, while white defendants were flagged as low-risk even though they reoffended in higher numbers. Such risk assessment tools reproduce and reinforce the social biases and discriminatory practices already existing within the judicial system, thus legitimizing and legalizing racism. Additionally, the schoolhouse-to-prison-house pipeline is being technologically advanced by the adoption of AI tools, including facial recognition and predictive analysis, to flag so-called high-risk K-12 students, where high-risk continues to be coded language for students of color. The funds used to implement this AI surveillance came from federal funds meant to support students during the COVID-19 pandemic. Before they even act, students of color are already being watched.
AI sexism is reinforced because toxic masculinity is profitable, proving lethal to women, specifically younger women. For example, AI nudify websites online, 85 as of this date, allow for the deepfake creation of non-consensual explicit images of women and girls, including child sexual abuse material. Specifically, the user can upload a photo of their victim and have AI, relying on the tech services of Google, Amazon, and Cloudflare, generate a nude rendition of that person. Such deepfakes can then be utilized to blackmail or cyberbully women and girls, resulting in unmeasurable harm if released. Or the deepfake videos can be monetized at adult entertainment websites like Pornhub. Once on the web, they become difficult, if not impossible, to scrub. Over a six-month period this year, some 18.5 million individuals utilized such sites, generating $18 million. Now, liberative ethics is community-based, and its imperative is solidarity. And yet AI reinforces the prevalent Eurocentric characteristic of radical hyper-individualism. Consider the ability to create an AI-powered likeness of a recently departed loved one, as is being done and marketed by startup companies like DeepBrain AI and HereAfter AI, where the afterlife ceases to be the domain of religious believers. These digital afterlife industries foster a techno-spiritualism fueled by realistic holograms or chatbots programmed with the departed's memories, thus providing an ability to communicate with the dead. This neo-spiritualism, like the 19th- and early 20th-century pseudoscientific attempts to communicate with the dead through séances, is also a profit-generating venture preying on those who are grieving. How soon in our capitalist culture before AI avatars start making suggestions for sponsors, like ordering the deceased's favorite food via special delivery on their birthday or anniversary? Of course, the damage can be more than just economic.
By delaying or postponing the processing of grief, greater psychological damage can occur. Studies indicate that such avoidance can lead to the creation of one's own reality, which reinforces isolation. By contrast, a liberative ethic calls upon the community to be presente and provide the necessary care and love for the surviving family members to work through their grief and find some healing among others comprised of flesh and blood. Furthermore, these hyper-realistic videos can easily be created of people saying and doing things which never occurred.
Deep fake avatars can undermine trust in human interactions with AI
Another ethical consideration must be considered: the misuse, or use, of the departed to advocate positions without their consent. On August 4th of this year, former CNN correspondent Jim Acosta interviewed an AI avatar of Joaquin Oliver, one of the 17 victims of the 2018 Parkland school shooting. This deepfake resurrection allowed Oliver to advocate for gun safety legislation. Such deepfakes of the departed steal their right to be forgotten, a prevailing problem with social media, as anyone who has ever tried to delete their Facebook account knows. But one need not be dead to have deepfake avatars misrepresent them. Take the example of Congresswoman Alexandria Ocasio-Cortez, who took to the House floor to castigate Sydney Sweeney's American Eagle jeans advertisement as racist. Chris Cuomo, formerly of CNN and presently a host at the cable news network NewsNation, shared the video on his Instagram. The video, however, was a deepfake creation. Besides creating political confusion, these deepfakes can create economic opportunities for bad actors. In May of 2023, an AI-generated video showing smoke and flames near the Pentagon spread on Twitter as verifiable breaking news. In the few minutes the post was up, though there was no explosion, the S&P 500 lost and recovered an estimated $500 billion. Some traders using AI algorithmic trading systems profited from this deepfake at the expense of others. Now, the usage of such deepfake avatars can be understood as pro-social deepfakes, as opposed to those designed to do harm, malicious deepfakes. Still, who gets to determine what is pro-social and what is malicious? What Trump administration officials might consider pro-social deepfakes I probably would define as malicious, and no doubt vice versa. And even if such deepfakes are used pro-socially, the hyper-realistic manifestation can harm societal trust and diminish the value of digital material.
When one thing is fake, an argument can be made that it is all fake news. If one is actually caught on video committing a crime or making an outrageous statement, it can simply be dismissed as deepfake news. Any trust in human-AI interaction must rely on transparency and interpretability. Understanding how AI arrives at decisions ensures individuals comprehend the decisions made about them and thus can challenge discriminatory and/or fake outcomes. From a liberation ethics standpoint, explainability is justice. The oppressed must have access to the reasoning that affects their lives. Emphasis must be placed on procedural ethics, who decides and how, not just the outcomes. This is a dynamic ethics which is rooted in dignity and human agency. So when nearly 4,500 ChatGPT conversations privately shared with friends, family, and colleagues, containing personal and sensitive information, appeared in Google search results, trust was eroded as privacy became an illusion. To develop trust in AI, precautions must be taken to keep individuals from being abused through algorithms designed to maximize efficiency and profitability. For example, Delta Air Lines' usage of AI to dynamically set prices based on a traveler's financial data rather than market forces. Or Hertz using AI scanners to charge customers, without any employee input, for microscopic damage to vehicles. Or Marriott using AI to determine which loyalty member gets an upgrade. Or credit firms using AI to deny loan applications or charge higher interest rates to non-whites. Problematic when you consider that people of color with similar financial characteristics as white people are more likely to be rejected for financial assistance: specifically, 80% if you're Black, 70% if you're Native American, and 40% if you're Latino. Of course, the usage of AI to exploit individuals is not limited to corporate America.
Consider the case of the Airbnb Manhattan host who submitted an alleged $16,000 damage claim citing a cracked coffee table, a urine-stained mattress, and broken appliances, with photos that were later identified as AI-generated. Even the government is employing AI to the detriment of its citizens. Medicare is planning to use AI, starting in January of 2026, to decide whether patients receive coverage. Delays and denials of coverage are incentivized because payments to the contracted companies employing this AI are based on reducing costs. Laws and regulations are required to safeguard against such exploitation by bad actors driven to succeed in a neoliberal economic system where short-term gains remain paramount over long-term good. Toward the national: when automobiles replaced the horse and buggy in the early 20th century, entire industries disappeared. Blacksmiths, horse stables, whip makers, carriage manufacturers. You couldn't even give away horses for free. Now, while certain jobs disappeared, newer jobs in the automobile industry were created, contributing to the strengthening of the middle class. But unlike the early 20th century, the rise of AI in the early 21st century is eliminating jobs without creating sufficient new ones. In 2025, the transformation of the labor market by AI is unfolding while most of the population remains unaware and unprepared. By mid-2025, unemployment surged to 5.8% as traditional first-rung entry-level positions began to vanish. CEO Dario Amodei of Anthropic, among the world's most powerful creators of AI, predicts that by 2030 half of all entry-level white-collar jobs could be replaced by AI, causing a spike in unemployment rates of between 10 and 20% over the next one to five years. In 2024, Big Tech reduced the hiring of new graduates by 25% compared to 2023, despite a full market recovery. The loss of entry-level jobs is not simply a lost opportunity.
It represents the loss of the corporate career ladder that past generations used to climb, wrecking careers before they ever begin. Entry-level jobs are not the only ones threatened. In the first six months of 2025, AI eliminated 77,999 tech jobs, or about 491 persons a day. Ford CEO Jim Farley places the short-term loss of white-collar jobs at 50%. A study by the World Economic Forum on the future of jobs noted that within the next five years, 41% of employers worldwide intend to reduce their workforce due to AI. The disappearance of white-collar jobs means the disappearance of a middle class, which for decades now has been in an economically downward spiral as the nation's wealth continues to expand. In the early 1960s, the average CEO made 20 times the salary of the average worker; 44 times by 1975. By 2020 the difference was 670 times, with some companies, like Amazon, paying their CEO 6,474 times the average worker. This disparity was due to multiple causes, including moving manufacturing jobs to countries paying substantially lower wages and the elimination of jobs through consolidation to increase corporate profits. Because CEO wages are tied to decreasing workers' wages, an incentive exists to eliminate jobs. In the past, those CEOs able to announce layoffs of 1,000 or more workers earned higher compensation, for cutting expenses, than those who did not announce layoffs. AI's contribution to the US wealth gap through the elimination of jobs is an unethical use because of the fundamental assumption within liberative ethical systems from the margins of society, which emphasize human dignity and human flourishing. The problem with our neoliberal worldview is that AI is being developed by those driven by enormous egos and extreme earnings, not ethics. Irrespective of the number of jobs lost due to AI and how soon the losses may occur, one business enticement which existed before the introduction of AI will continue to be true.
When labor costs, viewed as an expense, are reduced, stock values and CEO salaries rise. Replacing humans has always been incentivized within our current neoliberal economy.
The coming tsunami of unemployment will ravage white men, which will negatively impact society
The coming tsunami of unemployment will ravage U.S. communities, creating a crisis specifically among white men which will negatively impact society. Because of the prevailing sexism within US society, men, predominantly white men, represent 70% of law firm partners, 72% of Congress, 86% of tech founders, 90% of Fortune 500 CEOs, and 100% of US presidents. Men with or without a college degree even out-earn women with the same educational attainment. Due to institutional racism, those most likely to be unemployed are predominantly people of color. But what happens when those accustomed to white affirmative action, the privilege of employment, start being laid off? Studies show that for these men, employment is a stabilizing force. One British study showed that, quote, the strongest predictor of a positive mindset in men is, by far, secure and satisfying employment, end quote. Today, 1 in 8 men between 25 and 54 is not working, overrepresented by those who struggle with addiction or wind up in prison, those men likely to overdose or commit suicide. Not surprisingly, the havoc created by unemployment within white working-class communities will only increase, especially when we note that those men more likely to vote for authoritarian populist leaders promising to solve their dilemma are these same men seeking to regain their lost standing within society. Let's be clear: the fault of this coming unemployment tidal wave is not due to AI but to the short-term neoliberal capitalism which prioritizes maximizing profit over creating long-term benefits for all of humanity. Rather than augmenting work to reduce employees' workload and/or eliminate mundane tasks, it is more profitable for AI to move toward automation and replace the laborer. Take the warning uttered by former Google executive Mo Gawdat, who predicts that a 15-year span of eliminating all white-collar workers will start in 2027, ushering in what he calls, quote, a short-term dystopia, end quote.
Using his own startup company, Emma Love, as an example, he said that in the past it would have required hiring 350 developers, but now the entire operation is run by just three individuals, one of whom is soon to be replaced by AI. Among the many negative consequences is an increasing wealth gap which concentrates wealth in the hands of the few. Consider the societal imbalance caused by the 0.1% owning 14% of the nation's wealth, especially when the bottom 65% own just 2.4% of the nation's wealth, or about $4 trillion. One should not have been surprised at seeing tech CEO billionaires belonging to this 0.1% occupying the most exclusive seats during Trump's second inauguration, or having dinner with him last night. They include CEO Elon Musk of Tesla, though he wasn't there for dinner, wonder why, Jeff Bezos of Amazon, and Mark Zuckerberg of Meta, among the richest men in the world, whose combined worth is a trillion dollars. Remember, I said that the bottom 65% of the population has $4 trillion; these three men have 1 trillion just among themselves. Democracy is threatened when digital power is concentrated among a few tech bros, where bribes masquerading as political action committee donations unashamedly buy political influence in seeking a profit-centered rather than a humanity-centered AI. Although regulation is not necessarily the solution, it remains one of the tools that can be employed to create a more just AI ecosystem. Nonetheless, the current Trump administration, during the first days of the second term, signed an executive order on removing barriers to American leadership in artificial intelligence. The administration argues that the current regulations on AI are onerous and burdensome, and that such technology is far too important to be burdened at this early stage of development.
Basically, the White House plan indicates that the federal government, quote, should not allow AI-related federal funding to be directed toward states with burdensome AI regulation that waste these funds, end quote. And Republicans are not the only ones guilty of giving AI free rein. In my state of Colorado, Democrats proposed a bill in the State House that would shield AI companies from lawsuits, making it illegal for individuals harmed by AI to hold these companies accountable for unfair business practices. DOGE serves to demonstrate the dangers of unregulated AI. Thanks to a program created by Elon Musk, AI was employed to slash 200,000 federal regulations, with the goal of eliminating half, or more than another 100,000, of Washington's regulatory mandates by the first anniversary of Trump's inauguration. Maximizing profit through job elimination, coupled with deregulation, is a formula for political unrest. Although I am no Marxist, contrary to what some might believe, I do think old Karl had something back in 1867 when he predicted the internal contradiction of capitalism which would cause its demise. Simply stated: laborers' wages are an expense for CEOs, so a constant motivation exists to keep them low, or ideally eliminate said expense, to maximize profit. But wages are the laborers' main source of purchasing power, needed to acquire the goods and services they produce. If there are no jobs, there are no wages, and hence there is an overproduction which floods the market. And because unemployed workers can't afford to buy the products, the market crashes. This contradiction leads to systemic crises within capitalism, leading to its demise. I would argue that if this AI unemployment dystopia does unfold over the next five years, it might signal the early death rattles of our economic system. And regardless of how corrupt and oppressive it might be, I fear the violence entangled in its wake.
During economic strife, authoritarian political systems masquerading as populism are preferred to protect plutocrats' power, profits, and privileges. Power is obtained through conspiracy theories and disinformation, and it is maintained through surveillance. We are at the crest of a disinformation apocalypse. When disinformation and conspiracy theories are spread, that is, the election was rigged, or Obama is a Kenyan, to advance hate speech against transgender people, undocumented immigrants, or members of the opposing party, elections can be won. Consider the 2016 US presidential election, when Cambridge Analytica used the Facebook data of some 50 million users without their consent for manipulative, micro-targeted political ads designed to influence voting behavior. Fortunately, attempts at election interference in 2016 and 2020 were low-tech, relying on generic bot messaging containing low-quality content; thus they had a minor impact. But since then, AI has been fine-tuned as strategies continue to be developed by bad actors on a massive scale. The disinformation of 2016 and 2020, promoted by political bots designed to shock, is being replaced by human-like bots psychologically targeting individuals in a slow, subtle, and coercive manipulation of online conversation that has slipped into everyday digital discussions. According to Goldstein and Benson, two Vanderbilt professors specializing in national security, the Chinese company GoLaxy is believed to have already carried out such operations in Hong Kong and Taiwan, and no doubt is preparing to expand to the United States. Once disinformation is generated and spread by AI to win elections, power is sustained and maintained by sophisticated surveillance systems, which have been perfected by other nations, most successfully by China.
The prevalence of closed-circuit television cameras, coupled with facial recognition software and social media monitoring, creates a massive intrusion into private lives, enabling authorities to keep tabs on dissidents. AI has the ability to search, capture, and harness data at an astonishing scale within microseconds, with extraordinary precision, providing government agencies the ability to react to perceived threats in real time. Such Orwellian constant surveillance threatens the right to disagree with the ruling authority. U.S. agencies like the Department of Homeland Security have already confirmed employing AI to analyze social media posts to target what they label, quote, terrorist sympathizers, and the so-called extreme rhetoric or antisemitic activities of those applying for visas and green cards. Although denied by the White House, sources at federal agencies like the Environmental Protection Agency, the Department of Veterans Affairs, and the Department of Housing and Urban Development claim AI is monitoring workers, searching for language considered hostile or contrary to Trump. And even if these sources are wrong, the capacity for workplace surveillance already employed by US companies exists. When we consider how the federal government is currently being weaponized to intimidate or neuter potential opposition to the president, concerns exist about how AI can be utilized to target those challenging the party line. Of course, a national privacy bill could help mitigate these AI invasive practices. But such a bill passing Congress as it is currently embodied is at best a long shot. If democracy is to thrive, then divergent voices are not a threat to be extinguished. Instead, what is required of constitutional democracy is regulation of AI to uphold civil and human rights.
During his keynote address at the 2025 Black Hat cybersecurity conference, Ron Deibert, director of the Citizen Lab, a digital rights research group, argued that the cyber community can defend against a, quote, dramatic descent into authoritarianism, as opposed to what he sees occurring, a descent into a kind of fusion of tech and fascism, end quote. The deliberative ethical theme of solidarity, over and against isolated profit or convenience, must be prioritized within any type of AI ethics. As an ethical imperative, solidarity democratizes access to power, demanding AI systems share benefits equitably while distributing burdens, ensuring disenfranchised communities aren't left behind. This radical solidarity called for by liberationist ethics is incongruent with an AI that confines identity through algorithmic pigeonholing. Although Miguel Luengo-Oroz is a data scientist and not a liberation ethicist, he nonetheless observes that solidarity as an ethical principle is mostly absent from the development of AI. Incorporating solidarity as an AI principle means sharing in the prosperity generated by AI by redistributing the augmentation of productivity while ensuring AI does not contribute to inequalities. Additionally, the long-term implications of developing and deploying AI must first be assessed to ensure no group of humans becomes irrelevant. These ethical principles resonate with what Australian moral philosopher John Tasioulas calls a humanistic ethic, which stresses a commitment to a plurality of values, the importance of procedures rather than only the outcomes they yield, and the centrality accorded to individuals, with collective participation in defining human flourishing. I would argue that such an approach must supersede our current pervasive neoliberal economic model of optimizing profit and efficiency.
In the final analysis, I argue that solidarity transcends individualistic, human-centric approaches to ethics, moving the discourse toward a humanity-centered ethics. Technology should not entrench systemic inequality, but rather be utilized to reduce disenfranchisement. For AI to be liberative, communities must be meaningfully included in AI governance. Community voices must be included in the design process, especially those historically marginalized. This echoes UNESCO's and other global calls for transformative technology that serves human goals, and not the other way around, through collective participation and ethical evaluation of AI as opposed to top-down mandates.
Race to achieve AI global dominance has created new geopolitical divide
The last one: international. A race to achieve global AI dominance is underway. Whichever nation can construct the largest AI ecosystem will be able to dictate global AI standards and reap broad economic and military benefits. This race for global AI dominance has created a new geopolitical divide. East versus West, Global South versus former colonizers: those divides have become obsolete. The new divide is digital, between those nations with the computing power to build cutting-edge AI systems and those without. We are currently witnessing the reordering of the global sphere of influence, where once-global powers like Russia are being left out of the equation due to their lack of multi-billion-dollar AI data systems. As of this writing, only 33 countries have such advanced facilities; 150 countries, which include Russia, have none. Jockeying for global AI leadership are China, with 22 advanced facilities, and the United States, with 26. The US and China combined operate 90% of all advanced data centers, concentrating AI power in two ideologically opposed nations. Nations lacking data centers are experiencing a brain drain to these two centers, making them beholden to AI power structures. Let me wrap this up real fast, because I do want to say one more thing before 11 o'clock. The fear, then, is this: before, you had rogue nations like North Korea that could be a danger. Now, instead, any group, even a group within a failed state, can use AI to create weapons, whether they be biological or digital. So no longer do we have the same political global system. In fact, Henry Kissinger, not that I like to quote him a lot, said that one of the greatest dangers facing our global future is that failed states can use AI for the purpose of creating havoc.
And it doesn't have to be bad actors; even good actors wanting to do good may not account for the unintended consequences of the use of AI.
As an ethicist, what is the ethical act when it is hopeless
So I have 10 minutes left, and I had this beautiful conclusion, which I tore up after being with you all for a whole week. How can I say this? After everything I've said, it seems hopeless, doesn't it? The situation is hopeless. Now, I realize when I say that, I remember how many fingers went up yesterday when we were measuring how many people had realistic hope for the future. I did not raise a single finger. I am hopeless. In fact, I wrote a book called Embracing Hopelessness. That's how hopeless I am. And after doing the research for this paper, I am even more hopeless. And I say that because hope is truly a middle-class privilege. In other words, as long as I have a bank account, I can hope for the future. But if I live in Gaza today, or if I'm in prison in Alligator Alcatraz, there is no hope. And to go in and say all things work for good for those who are called according to God's purposes becomes trite, insensitive, and, quite honestly, oppressive. You see, we have developed a system where I have to go to the police department to get a permit to protest the police department for police brutality. I have domesticated praxis and action so I can, you know, make a sign of protest. I can go on a march. By the way, this is the only country in the world where you can drive to a march. Think of that privilege. I can go to the march, take a picture of myself, post it on Facebook, and say, look how active I am in the cause, knowing that nothing changes, because the structures are designed to allow me a place to voice my opposition, as long as it does not affect anything important. When we stand before the massiveness of neoliberalism and the AI that is developing to keep it in power, any resistance I could dream up, they've already thought of and are ten steps ahead of me. It is hopeless. We are not going to change the future.
So as an ethicist, my question becomes: what is the ethical act that I must engage in when it is hopeless? Hope domesticates. When I was in Auschwitz, there was a sign over the gate that said, work will set you free. That was hope. People ended up in incinerators anyway. But you see, if I have hope, if I keep my head down, if I don't make waves, if I don't bother the system, maybe I'll survive. And this is why hope domesticates. As long as we can give people hope, they won't rebel. But when I have nothing to lose, that's when I become the most dangerous. And when I have nothing to lose, I realize there is no hope, so I might as well do something. I strongly believe that most social movements arise because the people realize they're already the walking dead and they have nothing to lose, so they put their bodies on the line. So as an ethicist, how do I get people to become hopeless? I go back to what our ancestors have done. I'm not inventing anything new; the only thing I'm inventing is the language I'm using. I call this an ethics para joder. Now, for those of you who don't know Spanish and have not yet learned the language of the angels, allow me to translate. Well, no, I'm not going to translate, because the word actually is the same as a certain four-letter word that begins with f and ends with k, and I only curse in Spanish, not in English. What I mean by an ethics para joder is that when everything is stacked against you, the only ethical imperative is to screw with the system, to subvert the system, to play the trickster. And the playing of the trickster is what our people have always done when they've been oppressed. The Black community has Br'er Rabbit and Br'er Bear. Mexicanos have Cantinflas. Puerto Ricans have Juan Bobo. We Cubans have Pepito. And the Native American community has Coyote and Spider.
In other words, communities that have been oppressed have turned to their trickster images, understanding them as ways of subverting the system. I am a child of Elegua, who, for those who know anything about the Yoruba tradition, is the trickster. So I am the trickster, and from that I am pulling this into Christianity. I claim my cultural roots. My cultural roots are not Greek; they're Yoruba, they're medieval Catholicism. It is from there that I am developing this ethics para joder. So within this AI hopelessness that we are facing, the question I am trying to think through is: how do I engage in practices that undermine it, that subvert it, that challenge it? Not because I'm going to win. Again, I'm hopeless; I'm not going to win. But I might be able to get us to something that's just a little more just. Now, usually somebody says, well, wait a minute, you have to give them hope, because if they don't have hope, they won't do anything. And I would then respond: that's your middle-class privilege speaking. Because if I have a good bank account, I don't have to do anything. You're absolutely right. But my parents had nothing, and they still had to fight for justice, because they had no choice. You see, hope is a privilege that allows you not to have to do anything but rely on God and God's graces to take care of everything. What I'm arguing is, in my struggle for justice, I don't struggle because I think I'm going to win. I'm not. I don't struggle because I'm going to get an extra ruby in my crown when I get to heaven. I struggle because in the struggle I define the faith I claim to have, and, more importantly, I define my very humanity. I struggle because I really have no other choice, because of who I am and the community that I belong to. So in dealing with AI, neoliberalism, racism, classism, and sexism, I have to figure out: how do I ethically lie so I can discover what is true?
How do I ethically steal so I can feed the hungry? How do I ethically disrupt so I can create a level playing field? How do I ethically joke and play the trickster so that I can create a new reality? This is moving beyond good and evil, beyond those neat Eurocentric dichotomies of what is right and wrong. It is in the ambiguity in which most of us are forced to live. When the choice is not between something right and something wrong, but between something wrong and something worse, how do we rebel against the system? My argument is: by jodiendo. Thank you. Thank you so much.