
Future of Data Security

Qohash

Available episodes

5 of 28
  • EP 24 — Apiiro's Karen Cohen on Emerging Risk Types in AI-Generated Code
    AI coding assistants are generating pull requests with 3x more commits than human developers, creating a code-review bottleneck that manual processes can't handle. Karen Cohen, VP of Product Management at Apiiro, warns that AI-generated code introduces different risk patterns, particularly around privilege management, that are harder to detect than traditional syntax errors. Her research shows a shift from surface-level bugs to deeper architectural vulnerabilities that slip through code reviews, making automation not just helpful but essential for security teams.
    Karen's framework for contextual risk assessment evaluates whether vulnerabilities are actually exploitable by checking if they're deployed, internet-exposed, and tied to sensitive data, moving beyond generic vulnerability scores to application-specific threat modeling (see the first sketch after the episode list). She argues developers overwhelmingly want to ship quality code, but security becomes just another checkbox when leadership doesn't prioritize it alongside feature delivery.
    Topics discussed:
    • AI coding assistants generating 3x more commits per pull request, overwhelming manual code review processes and security gates.
    • The shift from syntax-based vulnerabilities to privilege management risks in AI-generated code that are harder to identify during reviews.
    • Implementing top-down and bottom-up security strategies to secure executive buy-in while building grassroots developer credibility and engagement.
    • A contextual risk assessment framework evaluating deployment status, internet exposure, and secret validity to prioritize app-specific vulnerabilities beyond CVSS scores.
    • Transitioning from siloed AppSec scanners to unified application risk graphs that connect vulnerabilities, APIs, PII, and AI agents.
    • Developer overwhelm driving security deprioritization when leadership doesn't communicate how vulnerabilities impact real end users and business outcomes.
    • The future of code security: agentic systems that continuously scan using architecture context and real-time threat intelligence feeds.
    • Balancing career growth by choosing scary positions with psychological safety, and gaining experience as both an individual contributor and a team player.
    --------  
    20:22
  • EP 23 — IBM's Nic Chavez on Why Data Comes Before AI
    When IBM acquired DataStax, it inherited an experiment that proved something remarkable about enterprise AI adoption. Project Catalyst gave everyone in the company — not just engineers — a budget to build whatever they wanted using AI coding assistants. Nic Chavez, CISO of Data & AI, explains why this matters for the 99% of enterprise AI projects currently stuck in pilot purgatory: the technical barrier to creating useful tools has collapsed.
    As a member of the World Economic Forum's CISO reference group, Nic has visibility into how the world's largest organizations approach AI security. The unanimous concern is that employees are accidentally exfiltrating sensitive data into free LLMs faster than security teams can deploy internal alternatives. The winning strategy isn't blocking external AI tools, but deploying better internal options that employees actually want to use.
    Topics discussed:
    • Why less than 1% of enterprise AI projects move from pilot to production.
    • How vendor-push versus customer-pull dynamics create misalignment with overall enterprise strategy.
    • The emergence of accidental data exfiltration as the primary AI security risk when employees dump confidential information into free LLMs.
    • How Project Catalyst democratized AI development by giving non-technical employees budgets to build with coding assistants, proving the technical barrier to useful tool creation has dropped dramatically.
    • The strategy of making enterprise AI "the cool house to hang out at" by deploying internal tools better than external options.
    • Why the velocity gap between attackers and enterprises in AI deployment comes down to procurement cycles versus instant hacker decisions for deepfake creation.
    • How the World Economic Forum's Chatham House Rule lets CISOs from the world's largest companies freely exchange ideas about AI governance without attribution concerns.
    • The role of LLM optimization in preventing a superintelligence trained on poisoned data by establishing data provenance verification.
    • Why Anthropic's copyright settlement signals the end of the "ask forgiveness, not permission" approach to training-data sourcing.
    • How edge intelligence versus cloud centralization decisions depend on data-freshness requirements and whether streaming updates from vector databases can supplement local models.
    --------  
    31:34
  • EP 22 — Databricks' Omar Khawaja on Why Inertia Is Security's Greatest Enemy
    What if inertia — not attackers — is security's greatest enemy? At Databricks, CISO Omar Khawaja transformed this insight into a systematic approach that flips traditional security thinking on its head and treats employees as assets rather than threats.
    Omar offers his T-junction methodology for breaking organizational inertia: instead of letting teams default to existing behaviors, he creates explicit decision points where continuing the status quo becomes impossible. This approach drove thousands of employees to voluntarily take optional security training in a single year.
    There's also Databricks' systematic response to AI security chaos. Rather than succumb to "top five AI risks" thinking, Omar's team catalogued 62 specific AI risks across four subsystems: data operations, model operations, the serving layer, and unified governance. Their public Databricks AI Security Framework (DASF) provides enterprise-ready controls for each risk (see the catalog sketch after the episode list), moving beyond generic guidance to actionable frameworks that work whether or not you're a Databricks customer.
    Topics discussed:
    • The T-junction framework to systematically break organizational inertia by eliminating default paths and forcing explicit decision-making.
    • A human risk management strategy of moving to behavior-driven programs that convert employees from liabilities to champions.
    • The 62-risk AI security classification of data layer, model operations, serving layer, and governance risks, with specific controls for each.
    • Methods for understanding true organizational risk appetite across business units, including the "double-check your math" approach.
    • A four-component agent definition and the specific risks emerging from chain-of-thought reasoning and multi-system connectivity.
    • Why an "AI strategy" creates shiny-object syndrome, and how to instead use AI to accelerate existing business strategy.
    --------  
    31:34
  • EP 21 — Sendbird's Yashvier Kosaraju on Creating Shared Responsibility Models for AI Data Security
    Sendbird's AI agents take backend actions on behalf of customers while processing sensitive support data across multiple LLM providers. That required building contractual frameworks that prevent customer data from training generic models while maintaining the feedback loops needed for enterprise-grade AI performance.
    CISO Yashvier Kosaraju walks Jean through their approach to securing agentic AI platforms that serve enterprise customers. Instead of treating AI security as a compliance checkbox, they've built verification pipelines that let customers see exactly what decisions the AI is making and adjust configurations in real time.
    But the biggest operational win isn't replacing security analysts: it's eliminating query languages entirely. Natural language processing now lets incident responders ask direct questions like "show me when Yash logged into his laptop over the last 90 days" instead of learning vendor-specific syntax (see the query sketch after the episode list). This cuts incident response time while making it easier to onboard new team members and switch between security tools without retraining.
    Topics discussed:
    • Reframing zero trust as explicit, continuously verified trust rather than eliminating trust entirely from security architectures.
    • Building contractual frameworks with LLM providers to prevent customer data from training generic models in enterprise AI deployments.
    • Implementing verification pipelines and feedback loops that allow customers to review AI decisions and adjust agentic configurations.
    • Using natural language processing to eliminate vendor-specific query languages during incident response and security investigations.
    • Managing security culture across multicultural organizations through physical presence and collaborative problem-solving rather than enforcement.
    • Addressing shadow AI adoption by understanding the underlying problems employees are solving instead of punishing policy violations.
    • Implementing shared responsibility models for AI data security across LLM providers, platform vendors, and enterprise customers.
    • Prioritizing internal employee authentication and enterprise security basics as startups scale from zero to a hundred employees.
    --------  
    20:40
  • EP 20 — MoonPay's Doug Innocenti on The Gut Instinct Gap in AI Security Operations
    What happens when you scale a crypto company across 160+ countries while maintaining the same security standards as Wells Fargo? At MoonPay, it meant rethinking how traditional banking security translates to high-velocity fintech environments. Doug Innocenti, CISO, breaks down how his team achieved PCI and SOC 2 Type 2 compliance and regulatory licenses like BitLicense and MiCA without slowing product development. The secret is the ability to test multiple security tools in parallel and pivot quickly when something isn't working.
    But velocity alone isn't enough, he cautions Jean. Doug's approach to AI in security reveals a critical insight: although AI-powered tools can dramatically reduce SOC response times and automate incident analysis, the "gut instinct gap" remains. His team uses AI to enable faster decisions, not to replace human judgment — especially when patterns don't match what the algorithms expect to see.
    Topics discussed:
    • Maintaining a bank-level security posture while enabling startup velocity through security-first architecture and platform design principles.
    • Scaling compliance across 160+ countries using pre-built infrastructure that accommodates PCI, SOC 2, BitLicense, and MiCA requirements.
    • Implementing parallel security tool testing to accelerate vendor evaluation and avoid bureaucratic delays in enterprise environments.
    • Adopting next-generation DLP solutions like DoControl that use AI-powered business intelligence for dynamic data boundary creation.
    • Balancing insider threat monitoring with external threat defense through compensating controls and rapid reaction capabilities.
    • Managing AI adoption risks while embracing acceleration benefits through defensive technology investment and vendor selection criteria.
    • Using AI-enhanced SOC and SIEM operations to reduce incident response times while preserving human judgment for pattern recognition.
    • Building a transparent security culture where all employees become security professionals rather than keeping security operations in the background.
    --------  
    22:36
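
The contextual risk assessment Karen describes in EP 24 can be pictured as a gating function: a finding only rises to the top when it is deployed, internet-exposed, and tied to sensitive data. A minimal Python sketch of that idea; the field names and weights are hypothetical, not Apiiro's actual model:

    from dataclasses import dataclass

    @dataclass
    class Finding:
        cvss: float                    # generic severity score
        deployed: bool                 # is the vulnerable code running anywhere?
        internet_exposed: bool         # is it reachable from outside?
        touches_sensitive_data: bool   # PII, secrets, etc.

    def contextual_priority(f: Finding) -> float:
        """Weight a generic CVSS score by application context, so only
        exploitable-in-practice issues rank highest. Weights are illustrative."""
        weight = 1.0
        if not f.deployed:
            weight *= 0.1
        if not f.internet_exposed:
            weight *= 0.3
        if f.touches_sensitive_data:
            weight *= 2.0
        return f.cvss * weight

    # A 9.8 CVSS in undeployed code ranks far below a 6.5 in an exposed,
    # data-bearing service.
    print(contextual_priority(Finding(9.8, deployed=False, internet_exposed=False, touches_sensitive_data=False)))  # ~0.29
    print(contextual_priority(Finding(6.5, deployed=True, internet_exposed=True, touches_sensitive_data=True)))     # 13.0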
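The 62-risk catalogue from EP 22 is organized by subsystem, with controls attached to each risk. A toy Python sketch of that shape, using the DASF's four subsystems; the risk names and controls below are illustrative placeholders, not quotations from the framework:

    # Toy excerpt of a subsystem -> risk -> controls mapping in the shape of
    # the DASF's four subsystems. Entries are made up for illustration.
    AI_RISK_CATALOG: dict[str, dict[str, list[str]]] = {
        "data_operations": {
            "training-data poisoning": ["data provenance checks", "access controls on raw datasets"],
        },
        "model_operations": {
            "model theft": ["registry ACLs", "artifact signing"],
        },
        "serving_layer": {
            "prompt injection": ["input filtering", "output moderation"],
        },
        "unified_governance": {
            "unaudited model access": ["centralized audit logging"],
        },
    }

    def controls_for(subsystem: str, risk: str) -> list[str]:
        """Look up the controls mapped to a given risk, or [] if uncatalogued."""
        return AI_RISK_CATALOG.get(subsystem, {}).get(risk, [])

    print(controls_for("serving_layer", "prompt injection"))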
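The "no query language" idea from EP 21 amounts to translating a natural-language question into whatever structured filter the underlying tool expects. A rough Python sketch under that assumption; the parsing here is a naive regex stand-in, where Sendbird's actual pipeline presumably uses an LLM:

    import re
    from datetime import datetime, timedelta

    def parse_question(question: str) -> dict:
        """Naive stand-in for the NL layer: pull a user and a lookback window
        out of questions like
        'show me when Yash logged into his laptop over the last 90 days'."""
        user = re.search(r"when (\w+) logged", question, re.IGNORECASE)
        days = re.search(r"last (\d+) days", question, re.IGNORECASE)
        return {
            "event": "login",
            "user": user.group(1) if user else None,
            "since": datetime.now() - timedelta(days=int(days.group(1)) if days else 30),
        }

    query = parse_question("show me when Yash logged into his laptop over the last 90 days")
    print(query)  # {'event': 'login', 'user': 'Yash', 'since': ...}
    # The structured dict is then handed to the security tool's own API, so
    # responders never touch vendor-specific query syntax.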


About Future of Data Security

Welcome to Future of Data Security, the podcast where industry leaders come together to share their insights, lessons, and strategies at the forefront of data security. Each episode features in-depth interviews with top CISOs and security experts who discuss real-world solutions, innovations, and the latest technologies that are shaping the future of cybersecurity across industries. Join us to gain actionable advice and stay ahead in the ever-evolving world of data security.
Podcast website
