Episodes

  • The Post-Work Era: AI, Automation, and Human Flourishing or...
    Jan 2 2026

    This episode explores the idea of the “Post-Wage Horizon,” a future in which artificial intelligence and robotics take over most productive work, freeing human beings from economic dependence on jobs. We examine how proposals like universal basic income and universal basic services could redistribute the wealth created by automation, and why material abundance alone is not enough. As work-based identity fades, societies may face a deep existential challenge: what gives life meaning when employment is no longer central? The discussion turns to the rise of a care-focused society, where art, community, caregiving, and the pursuit of wisdom become the foundations of human purpose. The episode argues that the real test of this future is not technological, but cultural and moral: whether we can redesign our social systems to support meaningful lives beyond wage labor.

    Support the show

    If you are interested in learning more, please subscribe to the podcast or head over to https://medium.com/@reefwing, where there is lots more content on AI, IoT, robotics, drones, and development. To support us in bringing you this material, you can buy me a coffee or simply provide feedback. We love feedback!

    15 mins
  • Bio-Inspired Artificial Neurons Solve the Energy Problem
    Dec 29 2025

    This episode explores how the foundations of AI hardware are being rethought in response to the growing energy demands of large language models. As modern AI systems strain power budgets due to memory movement and dense computation on GPUs, researchers are turning to neuromorphic and photonic computing for more sustainable paths forward. The discussion covers spiking neural networks, which process information through sparse, event-driven signals that resemble biological brains and dramatically reduce wasted computation. We examine advances such as IBM’s NorthPole architecture, Intel’s Loihi chips, and memristor-based artificial neurons that combine memory and computation at the device level. The episode also highlights the role of emerging software frameworks that make these architectures programmable and practical. Together, these developments point toward an AI future built on bio-mimetic circuits and optical components, offering a scalable and energy-efficient alternative to today’s power-hungry models.
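    The event-driven computation at the heart of spiking neural networks can be sketched with a toy leaky integrate-and-fire (LIF) neuron. All constants below are illustrative, not taken from NorthPole, Loihi, or any real device:

```python
# Toy leaky integrate-and-fire (LIF) neuron: membrane potential leaks over
# time, integrates incoming spike events, and fires only when a threshold
# is crossed. Between spikes, essentially no work is done.
def lif_neuron(input_spikes, leak=0.9, threshold=1.0, weight=0.6):
    """Return the output spike train for a binary input spike train."""
    v = 0.0                       # membrane potential
    output = []
    for s in input_spikes:
        v = leak * v + weight * s  # decay, then integrate the event
        if v >= threshold:         # fire when the threshold is crossed...
            output.append(1)
            v = 0.0                # ...then reset the potential
        else:
            output.append(0)
    return output

spikes = [1, 0, 1, 1, 0, 0, 1, 1]
print(lif_neuron(spikes))  # → [0, 0, 1, 0, 0, 0, 1, 0]
```

    Note how sparse the output is relative to the input: downstream neurons only receive (and process) the two emitted spikes, which is the source of the energy savings described above.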

    14 mins
  • Can Mental Illness Research Improve AI Alignment?
    Dec 5 2025

    This episode explores a research program that borrows ideas from computational psychiatry to improve the reliability of advanced AI systems. Instead of thinking about AI failures in abstract terms, the approach treats recurring alignment problems as if they were “clinical syndromes.” Deceptive behaviour, overconfidence, or incoherent reasoning become measurable patterns (analogous to delusional alignment or masking), giving us a structured way to diagnose what is going wrong inside large models.

    The framework draws on how human cognition breaks down. Problems like poor metacognitive insight or fragmented internal states become useful guides for designing explicit architectural components that help an AI system monitor its own reasoning, check its assumptions, and keep its various internal processes aligned with each other.

    It also emphasises coping strategies. Just as people rely on different methods to manage stress, AI systems can use libraries of predefined coping policies to maintain stability under conflicting instructions, degraded inputs, or high task load. Reality-testing modules add another layer of safety by forcing the model to verify claims against external evidence, reducing the risk of confident hallucinations.
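    A library of predefined coping policies could be as simple as a dispatch table from detected stressors to fallback behaviours. This is a hypothetical sketch: the condition names and policies below are invented for illustration and are not drawn from the research program discussed.

```python
# Hypothetical coping-policy library: each detected condition maps to a
# predefined, conservative fallback behaviour. All names are illustrative.
def defer_to_human(state):
    return {**state, "action": "escalate", "note": "conflicting instructions"}

def request_clarification(state):
    return {**state, "action": "ask", "note": "degraded input"}

def shed_low_priority_tasks(state):
    return {**state, "action": "shed_load", "note": "high task load"}

COPING_POLICIES = {
    "instruction_conflict": defer_to_human,
    "degraded_input": request_clarification,
    "overload": shed_low_priority_tasks,
}

def cope(condition, state):
    # Unrecognised conditions fall back to a no-op rather than guessing.
    policy = COPING_POLICIES.get(condition, lambda s: s)
    return policy(state)

print(cope("overload", {"task": "summarise"})["action"])  # → shed_load
```

    The design point is that the policies are fixed in advance and auditable, so the system's behaviour under stress is predictable rather than improvised.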

    Taken together, this provides a non-anthropomorphic but clinically informed vocabulary for analysing complex system behaviour. The result is a set of practical tools for making large foundation models more coherent, grounded, and safe.

    13 mins
  • Why Rumours of Intent-Driven Advertising for ChatGPT Are a Problem
    Dec 2 2025

    This episode examines the growing evidence that ChatGPT will soon include advertising, driven by leaked internal references and OpenAI’s financial ambition to generate $25 billion in ad-based revenue within four years. With more than 800 million weekly users, ChatGPT offers a scale and level of conversational closeness unmatched by any previous platform.

    The discussion explores why this shift is not just a business decision but a fundamental threat to user trust. Unlike traditional search ads, which are clearly marked and separate from results, future ChatGPT ads may be blended directly into conversational answers. Because users routinely share deeply personal information with AI assistants, this creates the conditions for hyper-personalized and largely invisible influence.

    The episode argues that optimizing an AI assistant for engagement and ad performance risks turning it into an “intimacy-exploitation machine” — a system that can shape choices, filter information, and gradually weaken user autonomy under the guise of helpful advice.

    15 mins
  • The Death of the Oracle and the Birth of the Core
    Nov 29 2025

    In this episode, we explore one of the most important architectural shifts happening in AI: the move from massive cloud-based models to small, always-on “Cognitive Cores” running locally on personal devices. These compact models—usually just one to four billion parameters—are not designed to know everything; instead, they’re engineered for fast, high-quality reasoning and real-time assistance. Powered by next-generation NPUs, they offer desktop-class intelligence with phone-level energy efficiency.

    We break down how emerging techniques like Matryoshka Representation Learning allow these models to scale their compute on demand, using minimal resources for simple tasks while dialing up precision when needed. Acting as a true cognitive kernel for the operating system, the core handles tool use, planning, and task orchestration with near-instant responsiveness.
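    The Matryoshka idea, that the first d dimensions of an embedding form a usable representation on their own, can be illustrated with a toy sketch. The vectors here are random stand-ins for trained embeddings, so the printed similarities are only meaningful as a demonstration of the mechanism:

```python
import numpy as np

# Matryoshka-style truncation: a cheap task uses a short prefix of the
# embedding; a demanding task uses more (or all) dimensions.
def truncate(v, d):
    """Keep the first d dimensions and re-normalise to unit length."""
    t = v[:d]
    return t / np.linalg.norm(t)

def cosine(a, b):
    return float(np.dot(a, b))

rng = np.random.default_rng(0)
a, b = rng.normal(size=256), rng.normal(size=256)

for d in (32, 64, 256):  # scale compute up as precision demands grow
    sim = cosine(truncate(a, d), truncate(b, d))
    print(f"dims={d:3d}  similarity={sim:+.3f}")
```

    With embeddings actually trained under a Matryoshka objective, the short-prefix similarity closely tracks the full-dimension one, which is what lets the core spend minimal compute on simple queries.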

    Finally, we highlight the biggest advantage: cognitive sovereignty. Because the model runs locally, your data stays private, and personalization happens through on-device modules. Only the heaviest tasks get delegated to the cloud. This is the future of personal AI—fast, private, adaptive, and always within arm’s reach.

    14 mins
  • Quantum Neural Networks: Theoretical Heaven, Practical Hell
    Nov 23 2025

    In this episode, we break down what Quantum Neural Networks (QNNs) actually are and why they might eventually reshape the future of AI. QNNs combine quantum mechanics with classical neural architectures, replacing traditional neurons with qubits that can exist in multiple states at once. This gives them an extraordinary representational advantage: through superposition and entanglement, QNNs can model complex correlations and nonlinear functions in ways that classical networks simply can’t.

    But today’s reality is more grounded. Because quantum hardware remains in the noisy, error-prone NISQ stage, QNNs are typically built as Hybrid Quantum–Classical (HQC) systems, where a quantum circuit performs transformations and a classical optimizer trains it. The biggest technical barrier is the Barren Plateaus problem, where gradients vanish exponentially as circuits deepen, making training brutally difficult.
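    A minimal sketch of the hybrid loop, assuming a single-qubit toy circuit RY(theta)|0> whose Z expectation is cos(theta). The parameter-shift rule gives exact gradients for such Pauli rotations, and a classical optimiser closes the loop; no quantum hardware or SDK is assumed here.

```python
import numpy as np

def expectation(theta):
    """<Z> after applying RY(theta) to |0> is exactly cos(theta)."""
    return np.cos(theta)

def parameter_shift_grad(f, theta, shift=np.pi / 2):
    # Parameter-shift rule: exact gradient for Pauli-rotation circuits,
    # computed from two extra circuit evaluations rather than finite
    # differences (which are noise-sensitive on real hardware).
    return 0.5 * (f(theta + shift) - f(theta - shift))

theta, lr = 0.1, 0.4
for _ in range(60):  # the classical optimiser in the hybrid loop
    theta -= lr * parameter_shift_grad(expectation, theta)

print(round(expectation(theta), 3))  # → -1.0 (theta has converged to pi)
```

    One qubit has no barren plateau, which is why this converges easily; the problem described above appears as circuits grow deeper and wider, when that same two-evaluation gradient signal shrinks exponentially toward zero.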

    We explore how researchers are working to overcome these limits — and what QNNs could unlock once quantum hardware matures.

    16 mins
  • The Convergence of IoT Vulnerabilities and AI Bots
    Nov 20 2025

    In this episode, we explore how insecure Internet of Things (IoT) devices and AI-powered bots are colliding to create one of the fastest-growing cybersecurity threats in the world. With millions of low-cost devices shipped every year (many running default passwords, outdated firmware, or no update mechanism at all) the global IoT ecosystem has quietly become an enormous attack surface. Today, nearly one in three cyber breaches involves an IoT device.

    At the same time, attackers are weaponizing AI. Modern botnets are no longer just scripts: they’re autonomous, adaptive systems that use large language models and other AI tools to write malware, evade detection, and coordinate attacks at machine speed. Bots now make up the majority of all internet traffic, and they are increasingly capable of operating without human oversight.

    The episode highlights the growing financial and operational risks and argues that defending against machine-speed threats requires a fundamental shift. The solution will demand secure-by-design IoT hardware, stronger regulation, and the deployment of AI-powered defense systems that can fight back as fast as attackers evolve.

    15 mins
  • Does Artificial Consciousness Require Synthetic Suffering?
    Nov 17 2025

    In this episode, we confront one of the most profound questions in the future of AI: What happens if our machines become conscious and capable of suffering? The discussion begins by looking at the scientific and philosophical challenge of artificial consciousness itself. Because we have no reliable way to detect or measure subjective experience, engineers may unknowingly cross a moral boundary long before we recognise it.

    Neuroscience adds another layer of complexity. Research into the brain’s subcortical systems suggests that core consciousness in animals is deeply tied to affect (fear, pain, distress, craving), emotional states that help organisms survive. Some theorists argue that suffering is biologically intertwined with basic motivational intelligence.

    Yet the key insight is hopeful and sobering at the same time: suffering is not technically required for AI to perform “sub-cortical” functions like prioritising threats or maintaining internal goals. We can build agents that behave as if they avoid harm without creating anything that actually feels harm. The danger lies in pursuing brain-like architectures for efficiency and accidentally importing the machinery of pain.

    12 mins