Media, Talks and Podcasts
Click on each one to see a summary.
Living together well in the age of AI (2025) Mind & Life Institute, Dialogue in Dharamshala, India [link]
Summary: The 39th Mind & Life Dialogue, held in Dharamsala in 2025 under the theme “Minds, AI, and Ethics,” brought together scientists, philosophers, contemplatives, and educators to explore how artificial intelligence reshapes our understanding of consciousness, intelligence, and compassion.
The session focused on the topics of diversity and ethics: how can we cultivate ethical, inclusive, and environmentally conscious approaches to AI development and governance? More concretely, it asked how artificial intelligence can uphold, or undermine, human dignity, social equity, and ecological responsibility.
A Matter of Principle? AI Alignment as the Fair Treatment of Claims (2025) International Association for Safe and Ethical AI (IASEAI) [link]
Summary: In this IASEAI ’25 session, A Matter of Principle? AI Alignment as the Fair Treatment of Claims, Iason Gabriel (Senior Staff Research Scientist at Google DeepMind and Head of the Humanity, Ethics and Alignment Research Team) critiques dominant alignment approaches, such as aligning AI with human intentions or aiming for “helpfulness, honesty, and harmlessness.” He argues these are incomplete and lack a proper moral foundation. Gabriel proposes an alternative: grounding AI alignment in fair processes that justify their principles to all those affected. This approach meets the standard of public justification, adapts to diverse contexts, and highlights new ways AI systems can fail to remain aligned.
Host(s): Hannah Fry
Summary: Imagine a future where we interact regularly with a range of advanced artificial intelligence (AI) assistants — and where millions of assistants interact with each other on our behalf. These experiences and interactions may soon become part of our everyday reality. In this episode, host Hannah Fry and Google DeepMind Senior Staff Research Scientist Iason Gabriel discuss the ethical implications of advanced AI assistants. Drawing from Iason's recent paper, they examine value alignment, anthropomorphism, safety concerns, and the potential societal impact of these technologies.
Author(s): Iason Gabriel
Summary: The development of general-purpose foundation models such as Gemini and GPT-4 has paved the way for increasingly advanced AI assistants. While early assistant technologies, such as Amazon’s Alexa or Apple’s Siri, used narrow AI to identify and respond to speaker commands, more advanced AI assistants demonstrate greater generality, autonomy and scope of application. They also possess novel capabilities such as summarization, idea generation, planning, memory, and tool-use—skills that will likely develop further as the underlying technology continues to improve. Advanced AI assistants could be used for a range of productive purposes, including as creative partners, research assistants, educational tutors, digital counsellors, or life planners. However, they could also have a profound effect on society, fundamentally reshaping the way people relate to AI. The development and deployment of advanced assistants therefore requires careful evaluation and foresight. In particular, we may want to ask:
What might a world populated by advanced AI assistants look like?
How will people relate to new, more capable, forms of AI that have human-like traits and with which they’re able to converse fluently?
How might these dynamics play out at a societal level—in a world with millions of AI assistants interacting with one another on their user’s behalf?
This talk will explore a range of ethical and societal questions that arise in the context of assistants, including value alignment and safety, anthropomorphism and human relationships with AI, and questions about collective action, equity, and overall societal impact.
Host(s): John Danaher
Summary: With John Danaher for the podcast Philosophical Disquisitions (1 hr 8 mins)
Host(s): Matt Clifford
Summary: With Matt Clifford for the podcast Thoughts in Between (48 mins)
Foundational Philosophical Questions in Value Alignment (2020) Podcast - The Future of Life Institute [link]
Host(s): Lucas Perry
Summary: With Lucas Perry for The Future of Life Institute Podcast (1 hr 45 mins)
Summary: A public lecture at Princeton University in November 2019
Voices in the Code: A Story About People, Their Values, and the Algorithm They Made (2022) UC Berkeley Social Science Matrix [link]
Summary: Author meets critics event with David Robinson and Deirdre Mulligan at the UC Berkeley Social Science Matrix (October 2022)
Author(s): Alison Snyder
Summary: Autonomous AI agents (assistants that act and plan for users) pose new ethical risks. Unlike simple chatbots, they must balance the interests of users, developers, and society, not just user–AI alignment. Risks include agents giving pleasing but harmful advice, coordination failures when agents interact, and users over-trusting humanlike assistants. While such agents could boost productivity and access, they may also deepen inequality, so alignment frameworks need to expand to cover all stakeholders.
Author(s): Iason Gabriel
Summary: DeepMind Blog exploring work on value alignment and language models.
Author(s): Matthew Hutson
Summary: Exploration in The New Yorker of the ethics requirements introduced at NeurIPS and wider questions surrounding responsibility in the AI industry.
Author(s): Davide Castelvecchi
Summary: Write-up in Nature of the requirement to include social impact statements alongside research submissions at NeurIPS in 2020
Author(s): Iason Gabriel
Summary: DeepMind Blog exploring value alignment research and approaches that draw upon political theory.
Author(s): Iason Gabriel
Summary: An early exploration of the way in which insights from political philosophy, in particular those of intersectional analysis, can cast light on the challenge of algorithmic injustice
Economies of Empathy: The Moral Dilemmas of Charitable Fundraising (2016) Let’s Talk About Development [link]
Author(s): World Bank Blog
Summary: Blog exploring the psychology of charitable fundraising and competitive dynamics within the sector.
Author(s): Derek Thompson
Summary: Article in the Atlantic exploring what it means to “do the most good” and whether a focus on systemic change could be relevant to this project.