EP226 AI Supply Chain Security: Old Lessons, New Poisons, and Agentic Dreams
Guest: Christine Sizemore, Cloud Security Architect, Google Cloud

Topics:
- Can you describe the key components of an AI software supply chain, and how do they compare to those in a traditional software supply chain?
- I hope folks listening have heard past episodes where we talked about poisoning training data. What are the other interesting and unexpected security challenges and threats associated with the AI software supply chain?
- We like to say that history might not repeat itself, but it does rhyme – what are the rhyming patterns in security practices people need to be aware of when it comes to securing their AI supply chains?
- We’ve talked a lot about technology and process – what are the organizational pitfalls to avoid when developing AI software? What organizational "smells" are associated with irresponsible AI development?
- We are all hearing about agentic security – so can we just ask the AI to secure itself?
- What are the top three things a typical organization should do to secure its AI software supply chain?

Resources:
- Video
- “Securing AI Supply Chain: Like Software, Only Not” blog (and paper)
- “Securing the AI software supply chain” webcast
- EP210 Cloud Security Surprises: Real Stories, Real Lessons, Real "Oh No!" Moments
- Protect AI issue database
- “Staying on top of AI Developments”
- “Office of the CISO 2024 Year in Review: AI Trust and Security”
- “Your Roadmap to Secure AI: A Recap” (2024)
- "RSA 2025: AI’s Promise vs. Security’s Past — A Reality Check" (references our "data as code" presentation)
--------
24:39
EP225 Cross-promotion: The Cyber-Savvy Boardroom Podcast: EP2 Christian Karam on the Use of AI
Hosts:
- David Homovich, Customer Advocacy Lead, Office of the CISO, Google Cloud
- Alicja Cade, Director, Office of the CISO, Google Cloud

Guest: Christian Karam, Strategic Advisor and Investor

Resources:
- EP2 Christian Karam on the Use of AI (as aired originally)
- The Cyber-Savvy Boardroom podcast site
- The Cyber-Savvy Boardroom podcast on Spotify
- The Cyber-Savvy Boardroom podcast on Apple Podcasts
- The Cyber-Savvy Boardroom podcast on YouTube
- Now hear this: A new podcast to help boards get cyber savvy (without the jargon)
- Board of Directors Insights Hub
- Guidance for Boards of Directors on How to Address AI Risk
--------
24:46
EP224 Protecting the Learning Machines: From AI Agents to Provenance in MLSecOps
Guest: Diana Kelley, CSO at Protect AI

Topics:
- Can you explain the concept of "MLSecOps" as an analogy with DevSecOps, with 'Dev' replaced by 'ML'? This has nothing to do with SecOps, right?
- What are the most critical steps a CISO should prioritize when implementing MLSecOps within their organization? What gets better when you do it?
- How do we adapt traditional security testing, like vulnerability scanning, SAST, and DAST, to effectively assess the security of machine learning models? Can we?
- In the context of AI supply chain security, what is the essential role of third-party assessments, particularly regarding data provenance?
- How can organizations balance the need for security logging in AI systems with the imperative to protect privacy and sensitive data? Do we need to decouple security from safety or privacy?
- What are the primary security risks associated with overprivileged AI agents, and how can organizations mitigate these risks?
- Top differences between LLM/chatbot AI security vs AI agent security?

Resources:
- “Airline held liable for its chatbot giving passenger bad advice - what this means for travellers”
- “ChatGPT Spit Out Sensitive Data When Told to Repeat ‘Poem’ Forever”
- Secure by Design for AI by Protect AI
- “Securing AI Supply Chain: Like Software, Only Not”
- OWASP Top 10 for Large Language Model Applications
- OWASP Top 10 for AI Agents (draft)
- MITRE ATLAS
- “Demystifying AI Security: New Paper on Real-World SAIF Applications” (and paper)
- LinkedIn Course: Security Risks in AI and ML: Categorizing Attacks and Failure Modes
--------
30:40
EP223 AI Addressable, Not AI Solvable: Reflections from RSA 2025
Guests: no guests, just us in the studio

Topics:
- At RSA 2025, did we see solid, measurably better outcomes from AI use in security, or mostly just "sizzle" and good ideas with potential?
- Are the promises of an "AI SOC" repeating the mistakes seen with SOAR in previous years regarding fully automated security operations? Does the "AI SOC" work, according to the RSA show floor?
- How realistic is the vision expressed by some [yes, really!] that AI progress could lead to technical teams, including IT and security, shrinking dramatically or even to zero in a few years?
- Why do companies continue to rely on decades-old or “non-leading” security technologies, and what role does the concept of an "organizational change budget" play in this inertia?
- Is being "AI Native" fundamentally better for security technologies compared to adding AI capabilities to existing platforms, or is the jury still out? Got "an AI-native SIEM"? Be ready to explain how yours is better!

Resources:
- EP172 RSA 2024: Separating AI Signal from Noise, SecOps Evolves, XDR Declines?
- EP119 RSA 2023 - What We Saw, What We Learned, and What We're Excited About
- EP70 Special - RSA 2022 Reflections - Securing the Past vs Securing the Future
- RSA (“RSAI”) Conference 2024 Powered by AI with AI on Top — AI Edition (Hey AI, Is This Enough AI?) [Anton’s RSA 2024 recap blog]
- New Paper: “Future of the SOC: Evolution or Optimization — Choose Your Path” (Paper 4 of 4.5) [discusses the change budget mentioned above]
--------
31:37
EP222 From Post-IR Lessons to Proactive Security: Deconstructing Mandiant M-Trends
Guests:
- Kirstie Failey @ Google Threat Intelligence Group
- Scott Runnels @ Mandiant Incident Response

Topics:
- What is the hardest thing about turning distinct incident reports into a fun-to-read and useful report like M-Trends?
- How much are the lessons and recommendations skewed by the fact that they are all “post-IR” stories? Are “IR-derived” security lessons the best way to improve security? Isn’t this a bit like learning how to build safely from fires vs learning safety engineering?
- The report implies that F500 companies suffer from certain security issues despite their resources; does this automatically mean that smaller companies suffer from the same issues, but more?
- "Dwell time" metrics sound obvious, but is there magic behind how this is done? Sometimes “dwell time going down” is not automatically the defender’s win, right? What is the expected minimum dwell time? If “it depends”, then what does it depend on?
- Impactful outliers vs general trends (“by the numbers”): which teaches us more about security?
- Why do we seem to repeat the same mistakes so much in security? Do we think it is useful to give the same advice repeatedly if the data implies that it is correct advice, but people clearly do not follow it?

Resources:
- M-Trends 2025 report
- Mandiant Attack Lifecycle
- EP205 Cybersecurity Forecast 2025: Beyond the Hype and into the Reality
- EP147 Special: 2024 Security Forecast Report
Cloud Security Podcast by Google focuses on security in the cloud, delivering security from the cloud, and all things at the intersection of security and cloud. Of course, we will also cover what we are doing in Google Cloud to help keep our users' data safe and workloads secure.
We’re going to do our best to avoid security theater and cut to the heart of real security questions and issues. Expect us to question threat models and ask whether something is done for the data subject’s benefit or just for organizational benefit.
We hope you’ll join us if you’re interested in where technology overlaps with process and bumps up against organizational design. We’re hoping to attract listeners who are happy to hear conventional wisdom questioned, and who are curious about what lessons we can and can’t keep as the world moves from on-premises computing to cloud computing.