
The AWS Developers Podcast

Amazon Web Services
Latest episode

203 episodes


    Neurosymbolic AI: Combining GenAI with Mathematical Proof — with Danilo Poccia

    08/04/2026 | 1 h 7 min
    What if you could combine the creative power of generative AI with the mathematical certainty of formal verification? In this episode, Danilo Poccia — Principal Developer Advocate at AWS — breaks down automated reasoning, a field of AI that has been quietly powering critical AWS services for years and is now becoming essential for production AI systems. We explore why generative AI alone is not enough for high-stakes applications, and how automated reasoning provides mathematical proof — not probabilistic guesses — that your AI agents are following the rules.

    Danilo traces the roots of automated reasoning back to the 'symbolist' branch of AI, explains how AWS has used it internally for years to verify S3 bucket policies, encryption algorithms, and network configurations, and shows how it now converges with neural networks in what researchers call neurosymbolic AI.

    On the practical side, we dig into Amazon Bedrock Guardrails with Automated Reasoning checks — the first and only generative AI safeguard that uses formal logic to verify response accuracy. Danilo walks through how developers can use policy verification for agentic systems and tool access control with Cedar, and how AgentCore Gateway fits into the picture for managing MCP-based tool interactions at scale.

    We also cover the open source landscape: Dafny for verification-aware programming, Lean as a theorem prover, Prolog for logic programming, and the growing ecosystem of MCP servers that bring these capabilities into everyday development workflows. Whether you are building AI agents for production or just curious about what comes after prompt engineering, this conversation will change how you think about AI reliability.
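The core contrast in the episode — probabilistic guesses versus mathematical proof — can be sketched in a few lines of plain Python. The guardrail rule below is purely hypothetical (it is not the Bedrock Guardrails API or any Cedar policy); it only illustrates the idea of proving an invariant by exhaustively checking a finite domain rather than sampling it.

```python
from itertools import product

# Hypothetical guardrail rule: an agent may call a tool only when the
# user is authenticated AND (the tool is read-only OR the user is admin).
def policy(authenticated, read_only, admin):
    return authenticated and (read_only or admin)

# Invariant to verify: the policy never permits an unauthenticated call.
def invariant(authenticated, read_only, admin):
    return (not policy(authenticated, read_only, admin)) or authenticated

# Enumerate every assignment of the three booleans: a proof by exhaustion
# over the whole (finite) input space, not a probabilistic sample of it.
counterexamples = [a for a in product([False, True], repeat=3)
                   if not invariant(*a)]
print(counterexamples)  # prints [] -> the invariant holds in all 8 cases
```

Real automated reasoning tools such as the ones named in the episode (Dafny, Lean, SMT solvers) scale this idea to infinite domains symbolically, but the guarantee is the same kind: no counterexample exists, rather than none was observed.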

    Agent-Native Serverless Development with Shridhar Pandey

    01/04/2026 | 47 min
    In this episode, we sit down with Shridhar Pandey, Principal Product Manager on AWS Serverless Compute, to explore how the serverless team is pioneering agent-native development. Shridhar walks us through a remarkable March 2026 where the team shipped three major capabilities in just three weeks — a Kiro Power for Durable Functions, a Kiro Power for SAM, and a serverless agent plugin now available in Claude Code and Cursor.

    We trace the journey from 18 months of traditional developer experience improvements — local testing, remote debugging, LocalStack integration — to the realization that AI agents are fundamentally changing how developers build, deploy, and operate serverless applications. The serverless MCP server, now approaching half a million downloads, laid the foundation, and the new agent plugin builds on it with four specialized skills covering Lambda functions, operational best practices, infrastructure as code with SAM and CDK, and durable functions.

    Shridhar shares his thinking on agent personas — developer agents, operator agents, and platform owner agents — and how the team is applying an 'AX' (agent experience) lens to every feature they ship. We also take a candid detour into how AI has transformed his own work as a product leader: research that took weeks now takes hours, document cycles that spanned days now wrap up in a single sitting, and a fleet of agents handles daily digests and data analysis for the team. Open source runs through everything — the MCP server, the plugin, the public Lambda roadmap on GitHub — and Shridhar invites the community to shape what comes next.

    The Hard Lessons of Cloud Migration: inDrive's Path from Monolith to Microservices

    25/03/2026 | 1 h 13 min
    Join us for a fascinating conversation with Alexander 'Sasha' Lisachenko (Software Architect) and Artem Gab (Senior Engineering Manager) from inDrive, one of the global leaders in mobility operating in 48 countries and processing over 8 million rides per day. Sasha and Artem take us through their four-year transformation journey from a monolithic bare-metal setup in a single data center to a fully cloud-native microservices architecture on AWS.

    They share the hard-earned lessons from their migration, including critical challenges with Redis cluster architecture, the discovery of single-threaded CPU bottlenecks, and how they solved hot key problems using Uber's H3 hexagon-based geospatial indexing. We dive deep into their migration from Redis to Valkey on ElastiCache, achieving 15-20% cost optimization and improved memory efficiency, and their innovative approach to auto-scaling ElastiCache clusters across multiple dimensions.

    Along the way, they reveal how TLS termination on master nodes created unexpected bottlenecks, how connection storms can cascade when Redis slows down, and why engine CPU utilization is the one metric you should never ignore. This is a story of resilience, technical problem-solving, and the reality of large-scale cloud transformations — complete with rollbacks, late-night incidents, and the eventual triumph of a fully elastic, geo-distributed platform serving riders and drivers across the globe.
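The hot-key problem the episode mentions has a well-known mitigation: split one hot logical key into several physical keys so its load spreads across cluster nodes. The sketch below shows the general pattern only — the episode does not specify inDrive's exact implementation, and the key names and shard count are illustrative.

```python
import random

SHARDS = 8  # number of physical copies of one hot logical key

def write_key(logical_key: str) -> str:
    """Pick one shard at random for a write, spreading load across nodes."""
    return f"{logical_key}:{random.randrange(SHARDS)}"

def read_keys(logical_key: str) -> list[str]:
    """Reads fan out to every shard; the caller aggregates the results
    (e.g. summing per-shard counters)."""
    return [f"{logical_key}:{i}" for i in range(SHARDS)]

# A write always lands on one of the keys a read will visit.
print(write_key("rides:downtown") in read_keys("rides:downtown"))  # True
```

The trade-off is an 8x read fan-out in exchange for an 8x reduction in per-node write pressure; H3-style geospatial indexing attacks the same problem from the other direction, by choosing cell-sized keys that are naturally less hot.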

    Episode 200: Java & Spring AI Are Winning the Enterprise AI Race — with James Ward & Josh Long

    18/03/2026 | 51 min
    It's a milestone — episode 200! And to mark the occasion, we're doing something we've never done before: hosting two guests at the same time. James Ward (Principal Developer Advocate at AWS) and Josh Long (Spring Developer Advocate at Broadcom, Java Champion, and host of 'A Bootiful Podcast') join Romain for a wide-ranging conversation about why Java and Spring AI are becoming the go-to stack for enterprise AI development.

    We kick off with Spring AI's rapid evolution — from its 1.0 GA release to the just-released 2.0.0-M3 milestone — and why it's far more than an LLM wrapper. James and Josh break down how Spring AI provides clean abstractions across 20+ models and vector stores, with type-safe, compile-time validation that prevents the kind of string-typo failures that plague dynamically typed AI code in production. The numbers back it up: an Azul study found that 62% of surveyed companies are building AI solutions on Java and the JVM. James and Josh explain why — enterprise teams need security, observability, and scalability baked in, not bolted on.

    We dive into the Agent Skills open standard from Anthropic and James's SkillsJars project for packaging and distributing agent skills via Maven Central. We also cover Spring AI's official Java MCP SDK (now at 1.0) and how MCP and Agent Skills complement each other for building capable, composable agents. The performance story is striking: Java MCP SDK benchmarks show 0.835ms latency versus Python's 26.45ms, 1.5M+ requests per second versus 280K, and 28% CPU utilization versus 94% — with even better numbers using GraalVM native images.

    Josh and James also walk us through Embabel, the new JVM-based agentic framework from Spring creator Rod Johnson, featuring goal-oriented and utility-based planners with type-safe workflow definitions built on Spring AI foundations. We close with a look at running Spring AI agents on Amazon Bedrock AgentCore — memory, browser support, code interpreter, and serverless containers for agentic workloads.

    AWS Hero Linda Mohamed: Juggling Cloud, Community & Agentic AI

    11/03/2026 | 1 h 5 min
    Some guests make you want to close your laptop and go build something. Linda Mohamed is one of them. In this episode, Romain sits down with Linda — AWS Community Hero, User Group Leader, Chairwoman of the AWS Community DACH Association, and independent cloud consultant based in Vienna. Linda started as a Java developer in on-premises enterprise environments. Her first AWS touch point? Building an Alexa skill for a smart home product — discovering Lambda almost by accident, and never looking back.

    Today she's building multi-agent AI systems, running an AI-powered video pipeline with five media customers, and doing it all while being one of the most energetic and generous contributors in the AWS community.

    Discover Linda's journey from Java developer in telecom to cloud and AI consultant, conference-driven development as a forcing function to ship, and building Otto — a multi-agent Slack bot using CrewAI, LoRA fine-tuning, and Amazon Bedrock AgentCore Runtime. Learn about the AI-powered video analysis pipeline she built to solve her own problem and ended up selling to five media customers, vibe coding vs spec-driven development and when each makes sense, and why Clean Code principles still apply when designing agent architectures.


About The AWS Developers Podcast

Stay updated on the latest AWS news and insights for developers, wherever you are, whenever you want.
Podcast website



© 2007-2026 radio.de GmbH