
Doom Debates

Liron Shapira
Latest episode

Available episodes

5 of 90
  • Top Professor Condemns AGI Development: “It’s Frankly Evil” — Geoffrey Miller
    Geoffrey Miller is an evolutionary psychologist at the University of New Mexico, bestselling author, and one of the world's leading experts on signaling theory and human sexual selection. His book "Mate" was hugely influential for me personally during my dating years, so I was thrilled to finally get him on the show.

    In this episode, Geoffrey drops a bombshell 50% P(Doom) assessment, coming from someone who wrote foundational papers on neural networks and genetic algorithms back in the '90s before pivoting to study human mating behavior for 30 years.

    What makes Geoffrey's doom perspective unique is that he thinks both inner and outer alignment might be unsolvable in principle, ever. He's also surprisingly bearish on AI's current value, arguing it hasn't been net positive for society yet despite the $14 billion in OpenAI revenue.

    We cover his fascinating intellectual journey from early AI researcher to pickup artist advisor to AI doomer, why Asperger's people make better psychology researchers, the polyamory scene in rationalist circles, and his surprisingly optimistic take on cooperating with China. Geoffrey brings a deeply humanist perspective. He genuinely loves human civilization as it is and sees no reason to rush toward our potential replacement.

    * 00:00:00 - Introducing Prof. Geoffrey Miller
    * 00:01:46 - Geoffrey’s intellectual career arc: AI → evolutionary psychology → back to AI
    * 00:03:43 - Signaling theory as the main theme driving his research
    * 00:05:04 - Why evolutionary psychology is legitimate science, not just speculation
    * 00:08:18 - Being a professor in the AI age and making courses "AI-proof"
    * 00:09:12 - Getting tenure in 2008 and using academic freedom responsibly
    * 00:11:01 - Student cheating epidemic with AI tools, going "fully medieval"
    * 00:13:28 - Should professors use AI for grading? (Geoffrey says no, would be unethical)
    * 00:23:06 - Coming out as Aspie and neurodiversity in academia
    * 00:29:15 - What is sex and its role in evolution (error correction vs. variation)
    * 00:34:06 - Sexual selection as an evolutionary "supercharger"
    * 00:37:25 - Dating advice, pickup artistry, and evolutionary psychology insights
    * 00:45:04 - Polyamory: Geoffrey’s experience and the rationalist connection
    * 00:50:96 - Why rationalists tend to be poly vs. Chesterton's fence on monogamy
    * 00:54:07 - The "primal" lifestyle and evolutionary medicine
    * 00:56:59 - How Iain M. Banks' Culture novels shaped Geoffrey’s AI thinking
    * 01:05:26 - What’s Your P(Doom)™
    * 01:08:04 - Main doom scenario: AI arms race leading to unaligned ASI
    * 01:14:10 - Bad actors problem: antinatalists, religious extremists, eco-alarmists
    * 01:21:13 - Inner vs. outer alignment - both may be unsolvable in principle
    * 01:23:56 - "What's the hurry?" - Why rush when alignment might take millennia?
    * 01:28:17 - Disagreement on whether AI has been net positive so far
    * 01:35:13 - Why AI won't magically solve longevity or other major problems
    * 01:37:56 - Unemployment doom and loss of human autonomy
    * 01:40:13 - Cosmic perspective: We could be "the baddies" spreading unaligned AI
    * 01:44:93 - "Humanity is doing incredibly well" - no need for Hail Mary AI
    * 01:49:01 - Why ASI might be bad at solving alignment (lacks human cultural wisdom)
    * 01:52:06 - China cooperation: "Whoever builds ASI first loses"
    * 01:55:19 - Liron’s Outro

    Show Notes

    Links
    * Geoffrey’s Twitter
    * Geoffrey’s University of New Mexico Faculty Page
    * Geoffrey’s Publications
    * Designing Neural Networks using Genetic Algorithms - His most cited paper
    * Geoffrey’s Effective Altruism Forum Posts

    Books by Geoffrey Miller
    * Mate: Become the Man Women Want (2015) - Co-authored with Tucker Max
    * The Mating Mind: How Sexual Choice Shaped the Evolution of Human Nature (2000)
    * Virtue Signaling: Essays on Darwinian Politics and Free Speech (2019)
    * Spent: Sex, Evolution, and Consumer Behavior (2009)

    Related Doom Debates Episodes
    * Liam Robins on College in the AGI Era - Student perspective on AI cheating
    * Liron Reacts to Steven Pinker on AI Risk - Critiquing Pinker's AI optimism
    * Steven Byrnes on Brain-Like AGI - Upcoming episode on human brain architecture

    ---

    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
    --------  
    1:57:01
  • Zuck’s Superintelligence Agenda is a SCANDAL | Warning Shots EP1
    I’m doing a new weekly show on the AI Risk Network called Warning Shots. Check it out! I’m only cross-posting the first episode here on Doom Debates. You can watch future episodes by subscribing to the AI Risk Network channel.

    This week's warning shot: Mark Zuckerberg announced that Meta is racing toward recursive self-improvement and superintelligence. His exact words: "Developing superintelligence is now in sight and we just want to make sure that we really strengthen the effort as much as possible to go for it." This should be front-page news. Instead, everyone's talking about some CEO's dumb shenanigans at a Coldplay concert.

    Recursive self-improvement is when AI systems start upgrading themselves - potentially the last invention humanity ever makes. Every AI safety expert knows this is a bright red line. And Zuckerberg just said he's sprinting toward it. In a sane world, he'd have to resign for saying this. That's why we made this show - to document these warning shots as they happen, because someone needs to be paying attention.

    * 00:00 - Opening comments about Zuckerberg and superintelligence
    * 00:51 - Show introductions and host backgrounds
    * 01:56 - Geoff Lewis psychotic episode and ChatGPT interaction discussion
    * 05:04 - Transition to main warning shot about Mark Zuckerberg
    * 05:32 - Zuckerberg's recursive self-improvement audio clip
    * 08:22 - Second Zuckerberg clip about "going for superintelligence"
    * 10:29 - Analysis of "superintelligence in everyone's pocket"
    * 13:07 - Discussion of Zuckerberg's true motivations
    * 15:13 - Nuclear development analogy and historical context
    * 17:39 - What should happen in a sane society (wrap-up)
    * 20:01 - Final thoughts and sign-off

    Show Notes

    Hosts:
    * Doom Debates - Liron Shapira's channel
    * AI Risk Network - John Sherman's channel
    * Lethal Intelligence - Michael's animated AI safety content

    This Episode's Warning Shots:
    * Mark Zuckerberg podcast appearance discussing superintelligence
    * Geoff Lewis (Bedrock VC) Twitter breakdown

    ---

    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
    --------  
    20:18
  • Rationalist Podcasts Unite! — The Bayesian Conspiracy ⨉ Doom Debates Crossover
    Eneasz Brodski and Steven Zuber host the Bayesian Conspiracy podcast, which has been running for nine years and covers rationalist topics from AI safety to social dynamics. They're both OG rationalists who've been in the community since the early LessWrong days around 2007-2010. I've been listening to their show since the beginning, and finally got to meet my podcast heroes!

    In this episode, we get deep into the personal side of having a high P(Doom) — how do you actually live a good life when you think there's a 50% chance civilization ends by 2040? We also debate whether spreading doom awareness helps humanity or just makes people miserable, with Eneasz pushing back on my fearmongering approach.

    We also cover my Doom Train framework for systematically walking through AI risk arguments, why most guests never change their minds during debates, the sorry state of discourse on tech Twitter, and how rationalists can communicate better with normies. Plus some great stories from the early LessWrong era, including my time sitting next to Eliezer while he wrote Harry Potter and the Methods of Rationality.

    * 00:00 - Opening and introductions
    * 00:43 - Origin stories: How we all got into rationalism and LessWrong
    * 03:42 - Liron's incredible story: Sitting next to Eliezer while he wrote HPMOR
    * 06:19 - AI awakening moments: ChatGPT, AlphaGo, and move 37
    * 13:48 - Do AIs really "understand" meaning? Symbol grounding and consciousness
    * 26:21 - Liron's 50% P(Doom) by 2040 and the Doom Debates mission
    * 29:05 - The fear mongering debate: Does spreading doom awareness hurt people?
    * 34:43 - "Would you give 95% of people 95% P(Doom)?" - The recoil problem
    * 42:02 - How to live a good life with high P(Doom)
    * 45:55 - Economic disruption predictions and Liron's failed unemployment forecast
    * 57:19 - The Doom Debates project: 30,000 watch hours and growing
    * 58:43 - The Doom Train framework: Mapping the stops where people get off
    * 1:03:19 - Why guests never change their minds (and the one who did)
    * 1:07:08 - Communication advice: "Zooming out" for normies
    * 1:09:39 - The sorry state of arguments on tech Twitter
    * 1:24:11 - Do guests get mad? The hologram effect of debates
    * 1:30:11 - Show recommendations and final thoughts

    Show Notes
    * The Bayesian Conspiracy — https://www.thebayesianconspiracy.com
    * Doom Debates episode with Mike Israetel — https://www.youtube.com/watch?v=RaDWSPMdM4o
    * Doom Debates episode with David Duvenaud — https://www.youtube.com/watch?v=mb9w7lFIHRM

    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
    --------  
    1:34:26
  • His P(Doom) Doubles At The End — AI Safety Debate with Liam Robins, GWU Sophomore
    Liam Robins is a math major at George Washington University who's diving deep into AI policy and rationalist thinking.

    In Part 1, we explored how AI is transforming college life. Now in Part 2, we ride the Doom Train together to see if we can reconcile our P(Doom) estimates. 🚂

    Liam starts with a P(Doom) of just 3%, but as we go through the stops on the Doom Train, something interesting happens: he actually updates his beliefs in real time!

    We get into heated philosophical territory around moral realism, psychopaths, and whether intelligence naturally yields moral goodness.

    By the end, Liam's P(Doom) jumps from 3% to 8% - one of the biggest belief updates I've ever witnessed on the show. We also explore his "Bayes factors" approach to forecasting, debate the reliability of superforecasters vs. AI insiders, and discuss why most AI policies should be Pareto optimal regardless of your P(Doom).

    This is rationality in action: watching someone systematically examine their beliefs, engage with counterarguments, and update accordingly.

    0:00 - Opening
    0:42 - What’s Your P(Doom)™
    01:18 - Stop 1: AGI timing (15% chance it's not coming soon)
    01:29 - Stop 2: Intelligence limits (1% chance AI can't exceed humans)
    01:38 - Stop 3: Physical threat assessment (1% chance AI won't be dangerous)
    02:14 - Stop 4: Intelligence yields moral goodness - the big debate begins
    04:42 - Moral realism vs. evolutionary explanations for morality
    06:43 - The psychopath problem: smart but immoral humans exist
    08:50 - Game theory and why psychopaths persist in populations
    10:21 - Liam's first major update: 30% down to 15-20% on moral goodness
    12:05 - Stop 5: Safe AI development process (20%)
    14:28 - Stop 6: Manageable capability growth (20%)
    15:38 - Stop 7: AI conquest intentions - breaking down into subcategories
    17:03 - Alignment by default vs. deliberate alignment efforts
    19:07 - Stop 8: Super alignment tractability (20%)
    20:49 - Stop 9: Post-alignment peace (80% - surprisingly optimistic)
    23:53 - Stop 10: Unaligned ASI mercy (1% - "just cope")
    25:47 - Stop 11: Epistemological concerns about doom predictions
    27:57 - Bayes factors analysis: Why Liam goes from 38% to 3%
    30:21 - Bayes factor 1: Historical precedent of doom predictions failing
    33:08 - Bayes factor 2: Superforecasters think we'll be fine
    39:23 - Bayes factor 3: AI insiders and government officials seem unconcerned
    45:49 - Challenging the insider knowledge argument with concrete examples
    48:47 - The privileged access epistemology debate
    56:02 - Major update: Liam revises Bayes factors, P(Doom) jumps to 8%
    58:18 - Odds ratios vs. percentages: Why 3% to 8% is actually huge
    59:14 - AI policy discussion: Pareto optimal solutions across all P(Doom) levels
    1:01:59 - Why there's low-hanging fruit in AI policy regardless of your beliefs
    1:04:06 - Liam's future career plans in AI policy
    1:05:02 - Wrap-up and reflection on rationalist belief updating

    Show Notes
    * Liam Robins on Substack -
    * Liam’s Doom Train post -
    * Liam’s Twitter - @liamhrobins
    * Anthropic's "Alignment Faking in Large Language Models" - The paper that updated Liam's beliefs on alignment by default

    ---

    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
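    A quick illustrative calculation of the odds-ratio point in the chapter list above (my own sketch, not material from the episode): moving a probability from 3% to 8% multiplies the underlying odds by roughly 2.8, which is why it counts as a large update even though the percentage change looks small.

    ```python
    # Illustrative sketch: why a 3% -> 8% move in P(Doom) is large when viewed as odds.
    # (Not from the episode; just the standard probability-to-odds conversion.)

    def odds(p: float) -> float:
        """Convert a probability to odds in favor."""
        return p / (1.0 - p)

    p_before, p_after = 0.03, 0.08

    odds_before = odds(p_before)              # ~0.031
    odds_after = odds(p_after)                # ~0.087
    update_factor = odds_after / odds_before  # ~2.8x multiplicative shift in odds

    print(f"Odds before: {odds_before:.3f}")
    print(f"Odds after:  {odds_after:.3f}")
    print(f"Odds multiplied by: {update_factor:.2f}x")
    ```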
    --------  
    1:05:12
  • AI Won't Save Your Job — Liron Reacts to Replit CEO Amjad Masad
    Amjad Masad is the founder and CEO of Replit, a full-featured AI-powered software development platform whose revenue reportedly just shot up from $10M/yr to $100M/yr+.

    Last week, he went on Joe Rogan to share his vision that "everyone will become an entrepreneur" as AI automates away traditional jobs.

    In this episode, I break down why Amjad's optimistic predictions rely on abstract hand-waving rather than concrete reasoning. While Replit is genuinely impressive, his claims about AI limitations—that they can only "remix" and do "statistics" but can't "generalize" or create "paradigm shifts"—fall apart when applied to specific examples.

    We explore the entrepreneurial bias problem, why most people can't actually become successful entrepreneurs, and how Amjad's own success stories (like quality assurance automation) actually undermine his thesis. Plus: Roger Penrose's dubious consciousness theories, the "Duplo vs. Lego" problem in abstract thinking, and why Joe Rogan invited an AI doomer the very next day.

    00:00 - Opening and introduction to Amjad Masad
    03:15 - "Everyone will become an entrepreneur" - the core claim
    08:45 - Entrepreneurial bias: Why successful people think everyone can do what they do
    15:20 - The brainstorming challenge: Human vs. AI idea generation
    22:10 - "Statistical machines" and the remixing framework
    28:30 - The abstraction problem: Duplos vs. Legos in reasoning
    35:50 - Quantum mechanics and paradigm shifts: Why bring up Heisenberg?
    42:15 - Roger Penrose, Gödel's theorem, and consciousness theories
    52:30 - Creativity definitions and the moving goalposts
    58:45 - The consciousness non-sequitur and Silicon Valley "hubris"
    01:07:20 - Ahmad George success story: The best case for Replit
    01:12:40 - Job automation and the 50% reskilling assumption
    01:18:15 - Quality assurance jobs: Accidentally undermining your own thesis
    01:23:30 - Online learning and the contradiction in AI capabilities
    01:29:45 - Superintelligence definitions and learning in new environments
    01:35:20 - Self-play limitations and literature vs. programming
    01:41:10 - Marketing creativity and the Think Different campaign
    01:45:45 - Human-machine collaboration and the prompting bottleneck
    01:50:30 - Final analysis: Why this reasoning fails at specificity
    01:58:45 - Joe Rogan's real opinion: The Roman Yampolskiy follow-up
    02:02:30 - Closing thoughts

    Show Notes
    Source video: Amjad Masad on Joe Rogan - July 2, 2025
    Roman Yampolskiy on Joe Rogan - https://www.youtube.com/watch?v=j2i9D24KQ5k
    Replit - https://replit.com
    Amjad’s Twitter - https://x.com/amasad
    Doom Debates episode where I react to Emmett Shear’s Softmax - https://www.youtube.com/watch?v=CBN1E1fvh2g
    Doom Debates episode where I react to Roger Penrose - https://www.youtube.com/watch?v=CBN1E1fvh2g

    ---

    Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate. Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates

    This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
    --------  
    1:45:48


About Doom Debates

It's time to talk about the end of the world! lironshapira.substack.com
Podcast website
