Doom Debates!

Liron Shapira
Latest episode

142 episodes


    Talking AI Doom with Dr. Claire Berlinski & Friends

    12/03/2026 | 1 h 26 min
    Dr. Claire Berlinski is a journalist, Oxford PhD, and author of The Cosmopolitan Globalist.
    She invited me to her weekly symposium to make the case for AI as an existential risk.
    Can we convince her sharp, skeptical audience that P(Doom) is high?
    Subscribe to The Cosmopolitan Globalist: https://claireberlinski.substack.com/
    Follow Claire on X: https://x.com/ClaireBerlinski
    “If Anyone Builds It, Everyone Dies” by Eliezer Yudkowsky & Nate Soares — https://ifanyonebuildsit.com
    Timestamps
    00:00:00 — Introduction
    00:02:10 — Welcome and Setting the Stage
    00:06:16 — Outcome Steering: The Magic of Intelligence
    00:10:40 — Collective Intelligence and the Path to ASI
    00:12:53 — The Five-Point Argument
    00:14:56 — The Alignment Problem and Control
    00:17:56 — The Genie Problem and Recursive Self-Improvement
    00:20:38 — Timeline: Five Years or Fifty?
    00:26:14 — Social Revolution and Pausing AI
    00:28:54 — Energy Constraints and Resource Limits
    00:31:23 — Morality, Empathy, and Superintelligence
    00:37:45 — How AI Is Actually Built
    00:38:31 — Computational Irreducibility and Co-Evolution
    00:44:57 — Foom and the Discontinuity Question
    00:46:44 — US-China Rivalry and the Arms Race
    00:49:36 — The Co-Evolution Argument
    00:55:36 — Alignment as Psychoanalysis
    00:57:24 — Anthropic’s “Harmless Slop” Paper
    01:00:33 — Policy Solutions: The Pause Button
    01:04:47 — Military AI and the Singularity
    01:07:10 — Cognitive Obstacles and Doom Fatigue
    01:09:07 — Why People Don’t Act
    01:13:00 — Reaching Representatives and Building a Platform
    01:17:12 — Sam Altman and the Manhattan Project Parallel
    01:19:14 — Community Building and Pause AI
    01:22:07 — Call to Action and Closing
    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
    Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates, or to really take things to the next level: Donate 🙏


    Get full access to Doom Debates at lironshapira.substack.com/subscribe

    How Friendly AI Will Become Deadly — Dr. Steven Byrnes (AGI Safety Researcher, Harvard Physics Postdoc) Returns!

    10/03/2026 | 1 h 28 min
    Fan favorite Dr. Steven Byrnes returns to discuss recent AI progress and the concerning paradigm shift to "ruthless sociopath AI" he sees on the horizon.
    Steven Byrnes, UC Berkeley physics PhD and Harvard physics postdoc, is an AI safety researcher at the Astera Institute and one of the most rigorous thinkers working on the technical AI alignment problem.
    Timestamps
    00:00:00 — Cold Open
    00:00:48 — Welcoming Back the Returning Champion
    00:02:38 — Research Update: What's New in The Last 6 Months
    00:04:31 — The Rise of AI Agents
    00:07:49 — What's Your P(Doom)?™
    00:13:42 — "Brain-Like AGI": The Next Generation of AI
    00:17:01 — Can LLMs Ever Match the Human Brain?
    00:31:51 — Will AI Kill Us Before It Takes Our Jobs?
    00:36:12 — Country of Geniuses in a Data Center
    00:41:34 — Why We Should Expect "Ruthless Sociopathic" ASI
    00:54:15 — Post-Training & RLVR — A "Thin Layer" of Real Intelligence
    01:02:32 — Consequentialism and the Path to Superintelligence
    01:17:02 — Airplanes vs. Rockets: An Analogy for AI
    01:24:33 — FOOM and Recursive Self-Improvement
    Links
    Steven Byrnes’ Website & Research — https://sjbyrnes.com/
    Steve’s X — https://x.com/steve47285
    Astera Institute — https://astera.org/
    “Why We Should Expect Ruthless Sociopath ASI” — https://www.lesswrong.com/posts/ZJZZEuPFKeEdkrRyf/why-we-should-expect-ruthless-sociopath-asi
    Intro to Brain-Like-AGI Safety — https://www.alignmentforum.org/s/HzcM2dkCq7fwXBej8
    Steve on LessWrong — https://www.lesswrong.com/users/steve2152
    AI 2027 — Scenario Timeline — https://ai-2027.com/
    Part 1: “The Man Who Might SOLVE AI Alignment” — https://www.youtube.com/watch?v=_ZRUq3VEAc0

    Q&A — Claude Code's Impact, Anthropic vs The Pentagon, Roko('s Basilisk) Returns + Liron Updates His Views!

    05/03/2026 | 2 h 19 min
    Multiple live callers join this month's Q&A as we cover the imminent demise of programming as a profession, the Anthropic/Pentagon showdown, and debate the finer details of wireheading.
    I clarify my recent AI doom belief updates, and then the man behind Roko's Basilisk crashes the stream to argue I haven't updated nearly far enough.
    Timestamps
    00:00:00 — Cold Open
    00:00:56 — Welcome to the Livestream & Taking Questions from Chat
    00:12:44 — Anonymous Caller Asks If Rationalists Should Prioritize Attention-Grabbing Protests
    00:18:30 — The Good Case Scenario
    00:26:00 — Hugh Chungus Joins the Stream
    00:30:54 — Producer Ori, Liron's Recent Alignment Updates
    00:43:47 — We're In an Era of Centaurs
    00:47:40 — Noah Smith's Updates on AGI and Alignment
    00:48:44 — Co Co Chats Cybersecurity
    00:57:32 — The Attacker's Advantage in Offense/Defense Balance
    01:02:55 — Anthropic vs The Pentagon
    01:06:20 — "We're Getting Frog Boiled"
    01:11:06 — Stoner AI & Debating the Finer Points of Wireheading
    01:25:00 — A Caller Backs the Penrose Argument
    01:34:01 — Greyson Dials In
    01:40:21 — Surprise Guest Joins & Says Alignment Isn't a Problem
    02:05:15 — More Q&A with Chat
    02:14:26 — Closing Thoughts
    Links
    * Liron on X — https://x.com/liron
    * AI 2027 — https://ai-2027.com/
    * “Good Luck, Have Fun, Don’t Die” (film) — https://www.imdb.com/title/tt38301748/
    * “The AI Doc” (film) — https://www.focusfeatures.com/the-ai-doc-or-how-i-became-an-apocaloptimist

    AI Will Take Our Jobs But SPARE Our Lives — Top AI Professor Moshe Vardi (Rice University)

    03/03/2026 | 1 h 7 min
    Professor Moshe Vardi thinks AI will kill us with kindness by automating away our jobs. I think it'll just kill us for real.
    Who’s right? Tune into this episode and decide where you get off the Doom Train™.
    Some highlights of Professor Vardi’s impressive CV:
    * University Professor at Rice — a rare distinction that lets him teach in any department.
    * 65,000+ citations, an H-index above 100, and nearly 50 years spent mechanizing reasoning, which makes him one of the most decorated computer scientists alive.
    * He ran the ACM’s flagship publication for a decade, and now bridges CS and policy at Rice’s Baker Institute.
    * He has been sounding the alarm on AI-driven job automation for over ten years.
    * He signed the 2023 AI extinction risk statement, and calls himself “part of the resistance.”
    Links
    * Moshe Vardi’s Wikipedia — https://en.wikipedia.org/wiki/Moshe_Vardi
    * Moshe Vardi, Rice University — https://profiles.rice.edu/faculty/moshe-y-vardi
    * Baker Institute for Public Policy — https://www.bakerinstitute.org/
    * Nick Bostrom, Deep Utopia: Life and Meaning in a Solved World — https://www.amazon.com/Deep-Utopia-Life-Meaning-Solved/dp/1646871642
    * Joseph Aoun, Robot-Proof: Higher Education in the Age of Artificial Intelligence — https://www.amazon.com/Robot-Proof-Higher-Education-Artificial-Intelligence/dp/0262535971
    Timestamps
    00:00:00 — Cold Open
    00:00:54 — Introducing Professor Vardi
    00:02:01 — Professor Vardi’s Academic Focus: CS, AI, & Public Policy
    00:07:18 — What’s Your P(Doom)™?
    00:12:28 — We’re Not Doomed, “We’re Screwed”
    00:16:44 — AI’s Impact on Meaning & Purpose
    00:27:47 — Let’s Ride the Doom Train ™
    00:35:43 — The Future of Jobs
    00:39:24 — A Country of Geniuses in a Data Center
    00:41:04 — Corporations as Superintelligence
    00:45:49 — Agency, Consciousness, and the Limits of AI
    00:50:07 — The Mad Scientist Scenario
    00:54:02 — Could a Data Center of Geniuses Destroy Humanity?
    01:03:13 — The WALL-E Meme and Fun Theory
    01:04:01 — Why Professor Vardi Signed the AI Extinction Risk Statement
    01:06:02 — Wrap-Up + 1 Way Ticket to Doom

    Destiny's Fans Challenged Me to an AI Doom Debate

    26/02/2026 | 38 min
    Fresh off my debate with Destiny, his Discord community invited me into their voice chat to talk about AI doom. Just like the man himself, his fans are sharp.
    Let's find out where they get off The Doom Train™.
    My recent debate with Destiny — https://www.youtube.com/watch?v=rNgffLZTeWw
    Timestamps
    00:00:00 — Cold Open
    00:00:54 — Liron Joins Destiny’s Discord
    00:02:21 — The AI Doom Premise
    00:03:27 — Defining Intelligence and Is An LLM Really AI?
    00:07:12 — Will AI Become Uncontrollable?
    00:12:44 — The AI Alignment Problem
    00:24:11 — The Difficulty of Pausing AI
    00:26:01 — AI vs The Human Brain
    00:32:41 — Future AI Capabilities, Steering Toward Goals, & Philosophical Disagreements


About Doom Debates!

It's time to talk about the end of the world. With your host, Liron Shapira. lironshapira.substack.com
Podcast website
