Future of Life Institute Podcast

Future of Life Institute
Latest episodes

488 episodes

  • Future of Life Institute Podcast

    How AI Can Help Humanity Reason Better (with Oly Sourbut)

    20/01/2026 | 1 h 17 min
    Oly Sourbut is a researcher at the Future of Life Foundation (FLF). He joins the podcast to discuss AI for human reasoning. We examine tools that use AI to strengthen human judgment, from collective fact-checking and scenario planning to standards for honest AI reasoning and better coordination. We also discuss how to keep humans central as AI scales, and what it would take to build trustworthy, society-wide sensemaking.
    LINKS:
    FLF organization site
    Oly Sourbut personal site
    CHAPTERS:
    (00:00) Episode Preview
    (01:03) FLF and human reasoning
    (08:21) Agents and epistemic virtues
    (22:16) Human use and atrophy
    (35:41) Abstraction and legible AI
    (47:03) Demand, trust and Wikipedia
    (57:21) Map of human reasoning
    (01:04:30) Negotiation, institutions and vision
    (01:15:42) How to get involved
    PRODUCED BY:
    https://aipodcast.ing
    SOCIAL LINKS:
    Website: https://podcast.futureoflife.org
    Twitter (FLI): https://x.com/FLI_org
    Twitter (Gus): https://x.com/gusdocker
    LinkedIn: https://www.linkedin.com/company/future-of-life-institute/
    YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/
    Apple: https://geo.itunes.apple.com/us/podcast/id1170991978
    Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP
  • Future of Life Institute Podcast

    How to Avoid Two AI Catastrophes: Domination and Chaos (with Nora Ammann)

    07/01/2026 | 1 h 20 min
    Nora Ammann is a technical specialist at the UK's Advanced Research and Invention Agency (ARIA). She joins the podcast to discuss how to steer a slow AI takeoff toward resilient and cooperative futures. We examine the risks of rogue AI and runaway competition, and how scalable oversight, formal guarantees, and secure code could support AI-enabled R&D and critical infrastructure. Nora also explains AI-supported bargaining and public goods for stability.
    LINKS:
    Nora Ammann site
    ARIA Safeguarded AI programme page
    AI Resilience official site
    Gradual Disempowerment website
    CHAPTERS:
    (00:00) Episode Preview
    (01:00) Slow takeoff expectations
    (08:13) Domination versus chaos
    (17:18) Human-AI coalitions vision
    (28:14) Scaling oversight and agents
    (38:45) Formal specs and guarantees
    (51:10) Resilience in AI era
    (01:02:21) Defense-favored cyber systems
    (01:10:37) AI-enabled bargaining and trade
    PRODUCED BY:
    https://aipodcast.ing
    SOCIAL LINKS:
    Website: https://podcast.futureoflife.org
    Twitter (FLI): https://x.com/FLI_org
    Twitter (Gus): https://x.com/gusdocker
    LinkedIn: https://www.linkedin.com/company/future-of-life-institute/
    YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/
    Apple: https://geo.itunes.apple.com/us/podcast/id1170991978
    Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP
  • Future of Life Institute Podcast

    How Humans Could Lose Power Without an AI Takeover (with David Duvenaud)

    23/12/2025 | 1 h 18 min
    David Duvenaud is an associate professor of computer science and statistics at the University of Toronto. He joins the podcast to discuss gradual disempowerment in a post-AGI world. We ask how humans could lose economic and political leverage without a sudden takeover, including how property rights could erode. Duvenaud describes how growth incentives shape culture, why aligning AI to humanity may become unpopular, and what better forecasting and governance might require.
    LINKS:
    David Duvenaud academic homepage
    Gradual Disempowerment
    The Post-AGI Workshop
    Post-AGI Studies Discord
    CHAPTERS:
    (00:00) Episode Preview
    (01:05) Introducing gradual disempowerment
    (06:06) Obsolete labor and UBI
    (14:29) Property, power, and control
    (23:38) Culture shifts toward AIs
    (34:34) States misalign without people
    (44:15) Competition and preservation tradeoffs
    (53:03) Building post-AGI studies
    (01:02:29) Forecasting and coordination tools
    (01:10:26) Human values and futures
    PRODUCED BY:
    https://aipodcast.ing
    SOCIAL LINKS:
    Website: https://podcast.futureoflife.org
    Twitter (FLI): https://x.com/FLI_org
    Twitter (Gus): https://x.com/gusdocker
    LinkedIn: https://www.linkedin.com/company/future-of-life-institute/
    YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/
    Apple: https://geo.itunes.apple.com/us/podcast/id1170991978
    Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP
  • Future of Life Institute Podcast

    Why the AI Race Undermines Safety (with Steven Adler)

    12/12/2025 | 1 h 28 min
    Steven Adler is a former safety researcher at OpenAI. He joins the podcast to discuss how to govern increasingly capable AI systems. The conversation covers competitive races between AI companies, the limits of current testing and alignment, mental health harms from chatbots, economic shifts from AI labor, and what international rules and audits might be needed before training superintelligent models.

    LINKS:
    Steven Adler's Substack: https://stevenadler.substack.com

    CHAPTERS:
    (00:00) Episode Preview
    (01:00) Race Dynamics And Safety
    (18:03) Chatbots And Mental Health
    (30:42) Models Outsmart Safety Tests
    (41:01) AI Swarms And Work
    (54:21) Human Bottlenecks And Oversight
    (01:06:23) Animals And Superintelligence
    (01:19:24) Safety Capabilities And Governance

    PRODUCED BY:
    https://aipodcast.ing

    SOCIAL LINKS:
    Website: https://podcast.futureoflife.org
    Twitter (FLI): https://x.com/FLI_org
    Twitter (Gus): https://x.com/gusdocker
    LinkedIn: https://www.linkedin.com/company/future-of-life-institute/
    YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/
    Apple: https://geo.itunes.apple.com/us/podcast/id1170991978
    Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP
  • Future of Life Institute Podcast

    Why OpenAI Is Trying to Silence Its Critics (with Tyler Johnston)

    27/11/2025 | 1 h 1 min
    Tyler Johnston is Executive Director of the Midas Project. He joins the podcast to discuss AI transparency and accountability. We explore applying animal rights watchdog tactics to AI companies, the OpenAI Files investigation, and OpenAI's subpoenas against nonprofit critics. Tyler discusses why transparency is crucial when technical safety solutions remain elusive and how public pressure can effectively challenge much larger companies.

    LINKS:
    The Midas Project Website
    Tyler Johnston's LinkedIn Profile

    CHAPTERS:
    (00:00) Episode Preview
    (01:06) Introducing the Midas Project
    (05:01) Shining a Light on AI
    (08:36) Industry Lockdown and Transparency
    (13:45) The OpenAI Files
    (20:55) Subpoenaed by OpenAI
    (29:10) Responding to the Subpoena
    (37:41) The Case for Transparency
    (44:30) Pricing Risk and Regulation
    (52:15) Measuring Transparency and Auditing
    (57:50) Hope for the Future

    PRODUCED BY:
    https://aipodcast.ing
    SOCIAL LINKS:
    Website: https://podcast.futureoflife.org
    Twitter (FLI): https://x.com/FLI_org
    Twitter (Gus): https://x.com/gusdocker
    LinkedIn: https://www.linkedin.com/company/future-of-life-institute/
    YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/
    Apple: https://geo.itunes.apple.com/us/podcast/id1170991978
    Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP

About Future of Life Institute Podcast

The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work comprises three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.
Podcast website
