On this episode, Scott and Jhave discuss AI consciousness, debating whether recent research in mechanistic interpretability suggests that frontier models are developing introspection, nascent sentience, and/or a "society of thought."
References
Aranyosi, M. (2026), Mechanistic Indicators of Understanding in Large Language Models, https://arxiv.org/abs/2507.08017
Butlin, P. et al. (2023), Identifying Indicators of Consciousness in AI Systems, https://arxiv.org/abs/2308.08708
Berg, C. et al. (2025), LLMs Report Subjective Experience under Self-Referential Processing, https://arxiv.org/abs/2510.24797
DeepMind (2026), Reasoning Models Generate Societies of Thought, https://arxiv.org/abs/2601.10825
Descartes, R. (1637), Discourse on the Method
Johnston, D. (2026), Consciousness, Understanding & Mechanistic Interpretability, https://glia.ca/2026/hbf/
Johnston, D. (2026), Is AI Conscious According to Current Criteria?, https://glia.ca/2026/hbf/iac/
Nanda, N. (2025), Emergent Introspective Awareness in Large Language Models, https://www.anthropic.com/research
Schneiderman, D. (2026), The HUMAN Project