UNOR+HODOX

Written by: Sadiq Dabale

About this listen

UNOR+HODOX – A Podcast on Ethics in the Age of AI

Welcome to UNOR+HODOX, a thought-provoking podcast exploring ethics through a Christian lens while tackling the moral dilemmas of artificial intelligence, technology, and modern society.

What We Discuss

Rooted in Christian moral philosophy, UNOR+HODOX challenges conventional perspectives on ethics, justice, and the human condition in an era dominated by AI-driven decisions. Each episode delves into questions such as:

• Can AI align with Christian principles of justice, mercy, and truth?

• Does artificial intelligence pose a threat to free will, dignity, and human purpose?

• How should faith communities respond to the ethical implications of automation, surveillance, and deepfakes?

• Is AI a tool for good or a moral gray area that challenges our understanding of divine wisdom?

Why Listen?

UNOR+HODOX is where faith meets the future—engaging theologians, ethicists, technologists, and philosophers in deep discussions on how Christian ethics can guide AI development, governance, and usage. Whether you’re a believer, skeptic, or just curious about the intersection of faith, technology, and morality, this podcast invites you to wrestle with the toughest ethical questions of our time.

Join us as we navigate the uncharted ethical landscapes of the AI age, questioning what it means to remain human, moral, and faithful in an era of machines.

📢 New Episodes Weekly | 🎙 Hosted by Sadiq Dabale

Poddy Mouth / OZORO
Episodes
  • When Drones Autonomously Kill
    Mar 3 2025

    Unorthodox Podcast – Episode Notes

    🎙 Episode Title: When Drones Autonomously Kill

    🔹 Episode Overview:

    In this episode of Unorthodox, we dive into one of the most controversial and urgent ethical debates of our time: the role of Artificial Intelligence in modern warfare. AI is already shaping our daily lives—from social media algorithms to self-driving cars—but what happens when it starts making life-or-death decisions on the battlefield?

    We explore how AI is revolutionizing military strategies, the rise of autonomous weapons, and the chilling reality of AI-powered drones that can kill without human approval. With real-world case studies, including the 2020 Libya drone incident, we dissect the ethical, legal, and security challenges that AI warfare presents.

    🔹 Key Topics Covered:

    ✅ The rise of AI-driven military technology: from autonomous drones to cyber warfare

    ✅ The real-life case study of the Kargu-2 drone attack in Libya—did AI make a kill decision on its own?

    ✅ The ethical dilemmas of AI in warfare: Who’s responsible when AI makes a mistake? Should machines be allowed to kill?

    ✅ The risk of AI making wars easier to start—will governments be more willing to fight if human soldiers aren’t at risk?

    ✅ The dangers of AI hacking and bias—what happens if an autonomous weapon is hijacked or makes the wrong call?

    ✅ The global debate on regulating AI weapons—should we ban them before it’s too late?

    🔹 Why This Matters:

    AI warfare is no longer science fiction. As nations race to develop intelligent weapons, we must ask hard questions now about the future we want to create. Will AI make wars safer or more dangerous? And once we give machines the power to kill, can we ever take it back?

    🔹 Join the Conversation:

    💬 What do you think? Should AI weapons be banned, or can they be used ethically? Let us know your thoughts!

    📩 Have a question or topic suggestion? Email us at poddy.mouth.unorthodox@gmail.com

    🎧 Listen Now on Apple Podcasts, Spotify, or wherever you get your podcasts!

    #AIwarfare #EthicsOfAI #AutonomousWeapons #FutureOfWar #UnorthodoxPodcast

    38 mins
  • The Right to Die
    Feb 25 2025

    Episode Overview:

    In this episode, we dive into the complex and deeply emotional debate on euthanasia. Starting with the UK’s ongoing discussions on legalizing assisted dying, we explore what euthanasia is, its global practices, and the ethical dilemmas it presents. From religious perspectives, especially biblical arguments, to philosophical debates on autonomy and suffering, we unpack both the concerns and the supporting viewpoints.

    Key Topics Covered:

    • The UK’s current debate on assisted dying laws

    • Defining euthanasia: voluntary, non-voluntary, and involuntary

    • Countries where euthanasia is legal and societal impacts

    • Ethical conflicts: autonomy vs. sanctity of life

    • Biblical perspectives and religious concerns

    • Supporting arguments: compassion, dignity, and personal choice

    • Practical considerations in legislation and healthcare

    • How societies can ethically navigate end-of-life decisions

    Takeaway Question for Listeners:

    “Where should we draw the line between compassion and moral responsibility when it comes to choosing how we die?”

    Call to Action:

    If you found this episode thought-provoking, share it with a friend and join the conversation on our socials. Let’s talk about life, death, and the choices in between.

    42 mins
  • Artificial Intelligence: Genetically Modifying Humans
    Feb 18 2025

    What is AI in Genetics?

    1. Keeping Genetic Information Safe: AI needs lots of genetic information to work. But who gets to keep this information? And how do we make sure it doesn’t get stolen? If someone gets their DNA data hacked, it could be used against them, like making it harder to get a job or health insurance. Real-Life Example: In 2018, the genetic testing company MyHeritage suffered a data breach affecting over 92 million users. This raised concerns about the safety of sensitive genetic data and how companies should protect it (Source: BBC News).
    2. Fairness in AI Predictions: AI can sometimes be unfair. If it mostly learns from certain groups of people, it might not work well for everyone. That means some people could get bad medical predictions or treatments that aren’t right for them. Scientists need to make sure AI learns from all kinds of people, not just a few. Real-Life Example: Studies have shown that many AI-driven medical algorithms have been trained primarily on data from people of European descent, making them less effective for individuals from other racial and ethnic backgrounds (Source: Nature Medicine, 2020).
    3. Should We Edit Genes? AI is helping with gene editing, which means changing parts of our DNA. This could help cure diseases, but what if people start using it to choose things like eye color, height, or intelligence? That could make life really unfair, especially if only rich people can afford it. Real-Life Example: In 2018, Chinese scientist He Jiankui announced that he had used CRISPR gene editing on human embryos to make them resistant to HIV. This sparked outrage worldwide because it raised ethical concerns about the risks and consequences of modifying human DNA (Source: The Guardian).
    4. Making Sure People Understand AI: If a doctor or scientist uses AI to learn about someone’s genes, they need to explain it in a way people can understand. It’s important that people know what’s happening with their DNA and that they agree to it. No one should feel confused or tricked. Real-Life Example: A 2021 study found that many people who take direct-to-consumer genetic tests (like 23andMe) don’t fully understand what their results mean. Some misinterpret their risk for diseases, leading to unnecessary anxiety or false reassurance (Source: JAMA Network).
    5. Rules to Keep Things Fair: Governments and scientists need to work together to make sure AI is used in the right way. But there’s a tricky balance—too many rules might slow down cool discoveries, but too few rules could lead to big problems. Finding the right balance is super important. Real-Life Example: The European Union has proposed strict AI regulations to prevent misuse in healthcare and genetics, but some experts worry that too many rules could stifle innovation (Source: European Commission Report, 2022).

    What’s Next?

    AI and genetics together could change the world! We could cure diseases and help people live longer, healthier lives. But we have to make sure these tools are used in fair and responsible ways. We all need to think about how to keep things safe and equal for everyone.

    Closing: So, what do you think? How should we use AI in genetics? It’s exciting but also a little scary, right? The future is being built right now, and we all have a part to play in making sure it’s fair and good for everyone. Thanks for hanging out and learning about this with me today!

    43 mins