
Mindful AI

Written by: Zenter

About this listen

Mindful AI delves into the world of AI and its responsible use and development. In this podcast, we chat about the latest advancements in AI technology and explore ways to develop and use AI mindfully and responsibly.
Episodes
  • AI, Spirituality and Wisdom with Anuraj Gambhir
    Nov 21 2023

    Our guest, Anuraj Gambhir, is an internationally recognized Strategic Business/Startup Advisor, Technology Visionary, Exponential Thought Leader and multi-award winning Innovator. He has over 30 years’ global experience across 5 continents and is a trans-disciplinary expert. His practical knowledge spans executive management, innovation, entrepreneurship, conscious leadership, exponential technologies, design thinking and holistic wellbeing. 

    In this episode, Anuraj envisions AI being used to elevate humanity by enhancing wellbeing, intelligence, and longevity, while harmonizing technology and spirituality. The right mindset and intentions are crucial before applying any technological tools. At the same time, we need to make time for digital detoxes, mindfulness, and nature connection to tap into our true selves. The key is finding equilibrium between using AI consciously as a tool and retaining our humanity. The technology itself is neutral - it depends on how we choose to apply it, whether for personal development or societal betterment.

    28 mins
  • AI Legitimacy and Data Privacy with Ruth Marshall
    Aug 29 2023

    Mindful AI’s guest, Ruth Marshall, works on real-world solutions for data privacy, Privacy Enhancing Technologies, and frameworks and methodologies for the responsible use of data. Ruth has spent the past 25 years moving between collaborative research and corporate communities, with a background in software and Artificial Intelligence. In the earlier half of her career, Ruth was responsible for product development and R&D in five software companies, including Accenture, Harvey Norman, and Novartis Pharmaceuticals. She is now co-founder of a data literacy and ethics education initiative at Hocone, where she works with organisations to develop frameworks and education programs for the responsible use of data. She was also engaged by the NSW Government to outline an approach and framework for responsible data use across the organisation.

    In Episode 9, we chat about:

    • Ruth’s main concerns around privacy and legitimacy (05:18): "I don't think people are even making assumptions right now about …whether the AI application that they're building, they have any legitimacy to do that. Are they the right person to do it?"
    • Co-creation and getting input and feedback from affected groups is important for establishing legitimacy and trust. Constant feedback loops are needed to flag issues.
    • An example of a legitimacy concern: a water charity using AI to understand water access in African communities without considering whether they are the right people to be telling these communities how to organize their lives (07:45).
    • Indigenous data sovereignty groups have long considered legitimacy an important concept regarding data and AI, stemming from illegitimate reorganization of their lives by European settlers. 
    • The need for more data literacy and ethics around data collection, preparation, and provenance, plus issues around representation, privacy, and legitimacy of use. There's a proliferation of AI tools and models with little quality control (20:38).
    • The lack of professionalisation and standards in AI/software engineering: there are no curriculum requirements and no way of ensuring baseline knowledge. Ruth suggests we need to move towards treating it as a profession with standards (21:46).
    • The need for balance between quality control/frameworks and not creating monopolies or barriers to entry. Favouring education over stringent restrictions.
    • On measuring outcomes: we need to refer back to original goals, but also monitor for unintended consequences using lived experience, borrowing from practices like post-market monitoring of drugs.
    • Models become outdated as the world changes - we need ongoing external validation of algorithms, data, and real-world interactions. Issues arise from changing context, not just the AI itself.
    • The overall importance of trust, transparency, co-creation with affected groups, adapting models to a changing world, and ongoing review of intended and unintended outcomes.

    In regard to AI competence vs. performance, Ruth would like to credit Rodney Brooks for the ideas she referenced - please see Brooks' article: https://spectrum.ieee.org/gpt-4-calm-down

    41 mins
  • Blitzscaling and Mindful AI with Chris Yeh
    Aug 16 2023

    Chris Yeh (https://chrisyeh.com/) is the co-founder of the Blitzscaling Academy, which teaches individuals and organizations how to plan for and execute on hypergrowth. He’s also cofounder of Blitzscaling Ventures, which invests in the world's fastest-growing startups. Chris has founded, advised, or invested in over 100 high-tech startups since 1995, including companies like Ustream and UserTesting.com. 

    He is the co-author, with Reid Hoffman, of Blitzscaling: The Lightning-Fast Path to Building Massively Valuable Companies, and the co-author, with Reid Hoffman and Ben Casnocha, of the New York Times bestseller The Alliance: Managing Talent in the Networked Age. Chris earned two degrees from Stanford University and an MBA from Harvard Business School, where he was named a Baker Scholar.

    Chris has practical experience working with AI, including co-authoring the book Impromptu with Reid Hoffman and GPT-4 in early 2023. As an investor, he is interested in AI companies automating tedious and routine tasks, not just the "sexiest" ideas.

    In this episode:

    • Yeh says self-awareness is critical for founders to build positive impact companies. Understanding your own impact allows you to build sustainable products (04:43).
    • Yeh says that at the highest level, we need to track whether AI actually creates greater value than the status quo, because at the end of the day, all of the civilization around us is the result of surplus added value (11:10).
    • Yeh thinks mindfulness is lacking from current AI. AI should be more aware of human emotions and more mindful, like Inflection AI's "Pi" model (15:38).
    • If AI were built with compassion and mindfulness, it could provide a tremendous benefit to humanity and help people going through difficult situations (16:35).
    • Overall, there's a need for AI to be designed thoughtfully with human values like mindfulness and compassion in mind, not just pure productivity.

    Listen to other episodes: https://zentermeditation.com/mindful-ai 

    21 mins