Embedded AI - Intelligence at the Deep Edge

Written by: David Such

About this listen

“Intelligence at the Deep Edge” is a podcast exploring the intersection of embedded systems and artificial intelligence. Dive into cutting-edge technology as we discuss how AI is transforming edge devices, enabling smarter sensors, efficient machine learning models, and real-time decision-making at the edge.


Discover more on Embedded AI (https://medium.com/embedded-ai) — our companion publication where we detail the ideas, projects, and breakthroughs featured on the podcast.


Help support the podcast - https://www.buzzsprout.com/2429696/support

© 2026 Kintarla Pty Ltd
Episodes
  • The Post-Work Era: AI, Automation, and Human Flourishing or...
    Jan 2 2026


    This episode explores the idea of the “Post-Wage Horizon,” a future in which artificial intelligence and robotics take over most productive work, freeing human beings from economic dependence on jobs. We examine how proposals like universal basic income and universal basic services could redistribute the wealth created by automation, and why material abundance alone is not enough. As work-based identity fades, societies may face a deep existential challenge: what gives life meaning when employment is no longer central? The discussion turns to the rise of a care-focused society, where art, community, caregiving, and the pursuit of wisdom become the foundations of human purpose. The episode argues that the real test of this future is not technological, but cultural and moral: whether we can redesign our social systems to support meaningful lives beyond wage labor.

    Support the show

If you are interested in learning more, please subscribe to the podcast or head over to https://medium.com/@reefwing, where there is much more content on AI, IoT, robotics, drones, and development. To support us in bringing you this material, you can buy me a coffee or simply provide feedback. We love feedback!

    15 mins
  • Bio-Inspired Artificial Neurons Solve the Energy Problem
    Dec 29 2025


    This episode explores how the foundations of AI hardware are being rethought in response to the growing energy demands of large language models. As modern AI systems strain power budgets due to memory movement and dense computation on GPUs, researchers are turning to neuromorphic and photonic computing for more sustainable paths forward. The discussion covers spiking neural networks, which process information through sparse, event-driven signals that resemble biological brains and dramatically reduce wasted computation. We examine advances such as IBM’s NorthPole architecture, Intel’s Loihi chips, and memristor-based artificial neurons that combine memory and computation at the device level. The episode also highlights the role of emerging software frameworks that make these architectures programmable and practical. Together, these developments point toward an AI future built on bio-mimetic circuits and optical components, offering a scalable and energy-efficient alternative to today’s power-hungry models.
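    The event-driven sparsity described above can be illustrated with a minimal leaky integrate-and-fire (LIF) neuron, the basic unit of most spiking neural networks. This is an illustrative sketch only; the parameter values (threshold, leak factor) are arbitrary and do not reflect any specific chip such as Loihi or NorthPole.

    ```python
    # Minimal leaky integrate-and-fire neuron: integrates input current with a
    # leaky decay and emits a spike event only when a threshold is crossed.
    # Between spikes, downstream computation can be skipped entirely, which is
    # the source of the energy savings discussed in the episode.

    def lif_neuron(inputs, threshold=1.0, leak=0.9):
        """Return a list of spike events (0 or 1) for a stream of input currents."""
        v = 0.0          # membrane potential
        spikes = []
        for current in inputs:
            v = leak * v + current   # leaky integration of the input
            if v >= threshold:       # fire only when the threshold is crossed
                spikes.append(1)
                v = 0.0              # reset the membrane potential after a spike
            else:
                spikes.append(0)     # no event: nothing to propagate downstream
        return spikes

    # Sub-threshold inputs accumulate silently until a spike is triggered.
    print(lif_neuron([0.5, 0.5, 0.5]))  # → [0, 0, 1]
    ```

    In a real spiking network, only the `1` events are communicated between neurons, so activity (and energy use) scales with how often spikes occur rather than with the size of the network.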


    14 mins
  • Can Mental Illness Research Improve AI Alignment?
    Dec 5 2025


This episode explores a research program that borrows ideas from computational psychiatry to improve the reliability of advanced AI systems. Instead of thinking about AI failures in abstract terms, the approach treats recurring alignment problems as if they were “clinical syndromes.” Deceptive behaviour, overconfidence, or incoherent reasoning become measurable patterns (analogous to delusional alignment or masking), giving us a structured way to diagnose what is going wrong inside large models.

    The framework draws on how human cognition breaks down. Problems like poor metacognitive insight or fragmented internal states become useful guides for designing explicit architectural components that help an AI system monitor its own reasoning, check its assumptions, and keep its various internal processes aligned with each other.

    It also emphasises coping strategies. Just as people rely on different methods to manage stress, AI systems can use libraries of predefined coping policies to maintain stability under conflicting instructions, degraded inputs, or high task load. Reality-testing modules add another layer of safety by forcing the model to verify claims against external evidence, reducing the risk of confident hallucinations.
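    The reality-testing idea can be sketched in a few lines: before a claim is accepted, it is checked against an external evidence store. Everything here is a hypothetical placeholder — the evidence dictionary, claim format, and function name are illustrative and not part of any published framework.

    ```python
    # Toy reality-testing module: a claim is accepted only if it can be
    # verified against external evidence; unverifiable or contradicted claims
    # are rejected rather than confidently asserted.

    EVIDENCE = {                      # stand-in for an external knowledge source
        "loihi_vendor": "Intel",
        "northpole_vendor": "IBM",
    }

    def reality_check(claim_key, claimed_value, evidence=EVIDENCE):
        """Return (accepted, note) for a claim checked against evidence."""
        if claim_key not in evidence:
            return False, "unverifiable: no evidence available"
        if evidence[claim_key] != claimed_value:
            return False, f"contradicted: evidence says {evidence[claim_key]!r}"
        return True, "verified"
    ```

    The key design choice is that the default outcome is rejection: a claim with no supporting evidence is treated the same way as a contradicted one, which is exactly the behaviour needed to reduce confident hallucinations.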

    Taken together, this provides a non-anthropomorphic but clinically informed vocabulary for analysing complex system behaviour. The result is a set of practical tools for making large foundation models more coherent, grounded, and safe.


    13 mins