Dignity-centred AI with Lorenn Ruster

Lorenn Ruster is from the ANU School of Cybernetics and is a responsible technology collaborator with the Centre for Public Impact.

In this episode, Lorenn emphasises that there are many possible futures, not just dystopian or utopian ones, and that different people experience the present and the future differently. Fixating on one 'terrifying' future risks creating helplessness, when we can in fact shape the future.

As entrepreneurs aim to scale technology, it's important that they consider what future they're working toward and what its implications are. How we think and talk about the future, both in society and in organisations, shapes what actually happens.

Some of Lorenn's concerns include the 'mindlessness' of some technology development, the impact on marginalised groups neglected in design, and threats to human dignity from AI systems that determine people's futures in ways the affected groups have not agreed to.

Lorenn's advice for responsible, dignity-centred AI:

1. Start by understanding what you're actually building - this simple first step is essential for responsible development.

2. Create reflection spaces - slow down, consider impacts and stakeholders, and address biases. Make these conversations valued, incentivised and team-building.

3. Bring teams together in these reflection spaces - to understand themselves, each other and, collectively, what they're building. This is key to responsibility and to staying mindful of consequences.

Lorenn sees changing system dynamics as key to shifting how AI is developed and used responsibly. Leverage points include incentives, information flows and mindsets. Progress toward responsible AI can be indicated through dignity-lens questions, case studies, and more qualitative metrics such as proximity to users and the use of both cognitive and intuitive intelligence.

Find more episodes of Mindful AI - https://www.zentermeditation.com/mindful-ai