
The New Stack Podcast

Latest episode

385 episodes

  • The New Stack Podcast

    Why Block handed Goose to the Linux Foundation

    15/05/2026 | 19 min
    What began as an internal developer tool at Block has evolved into a broader open-source initiative with industry backing. Goose, Block’s AI coding agent, followed a path similar to Amazon’s transformation of internal infrastructure into Amazon Web Services. After deploying Goose companywide, Block open-sourced the tool under a permissive license, leading to rapid adoption across the developer community.

    But according to Manik Surtani, of Block’s Office of the CTO and a co-founder of the Agentic AI Foundation, early momentum exposed governance challenges. Although Goose was technically open source, Block retained trademark ownership, creating concerns for enterprises seeking truly independent governance. To address this, the team partnered with Anthropic and the Model Context Protocol community to establish the Agentic AI Foundation under the umbrella of the Linux Foundation.

    Goose, MCP, and Agents.MD became the foundation’s initial projects, chosen largely to accelerate the launch of the new organization and create a collaborative ecosystem around agentic AI development.

    Learn more from The New Stack about the latest in open-source AI: 

    Anthropic extends MCP with a UI framework

    Why the Linux Foundation adopted MCP, with Jim Zemlin and Mazin Gilbert

    Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
  • The New Stack Podcast

    Fivetran's CPO: closed data stacks won't survive the agent era

    13/05/2026 | 22 min
    At Google Cloud Next 2026, Fivetran Chief Product Officer Anjan Kundavaram argued that enterprise data systems are unprepared for the scale of AI-driven analytics. Unlike humans, AI agents can generate exponentially more queries, often routing them through the same expensive compute infrastructure. Kundavaram compared it to “using a Lamborghini to mow the lawn.” To address this, Fivetran introduced its “Open Data Infrastructure” vision and a benchmark designed to expose hidden AI workload costs in closed ecosystems.

    Kundavaram said agents can optimize for cost instead of speed, choosing cheaper compute engines when appropriate — but only in open architectures with multiple options. Closed systems force every query through high-cost paths. He also warned that fragmented data and weak context create a “triple whammy” of poor AI responses, soaring analytics bills, and wasted compute. While many organizations respond by tightening controls, Kundavaram argued the better path is investing in open infrastructure, interoperability, and strong semantic data practices before AI costs spiral further.
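    As a rough illustration of the cost-aware routing Kundavaram describes, the sketch below sends each query to the cheapest engine able to satisfy it, falling back to the expensive warehouse only when no cheaper option qualifies. The engine names, prices, and capability sets are invented for the example, not Fivetran's catalog.

```python
from dataclasses import dataclass

@dataclass
class Engine:
    name: str
    cost_per_query: float  # illustrative flat cost in dollars
    capabilities: set      # query features the engine supports

# Hypothetical engine catalog: an open architecture exposes several options.
ENGINES = [
    Engine("duckdb-on-iceberg", 0.001, {"scan", "aggregate"}),
    Engine("serverless-sql", 0.01, {"scan", "aggregate", "join"}),
    Engine("warehouse", 0.25, {"scan", "aggregate", "join", "ml_function"}),
]

def route(required: set) -> Engine:
    """Pick the cheapest engine whose capabilities cover the query."""
    candidates = [e for e in ENGINES if required <= e.capabilities]
    if not candidates:
        raise ValueError(f"no engine supports {required}")
    return min(candidates, key=lambda e: e.cost_per_query)
```

    A closed system, by contrast, behaves as if the list held only the warehouse entry: every agent query, however simple, pays the top rate.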


    Learn more from The New Stack about the latest in enterprise data systems: 

    Enterprise AI Success Demands Real-Time Data Platforms

    AI Agents Are Morphing Into the 'Enterprise Operating System'

    Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
  • The New Stack Podcast

    The new FinOps problem isn't cloud bills

    12/05/2026 | 28 min
    At Google Cloud Next 2026, Finout co-founder and CEO Roi Ravhon and Google Cloud FinOps lead Pathik Sharma discussed how FinOps is rapidly evolving for the AI era. Ravhon argued that while cloud FinOps had a decade to mature, AI economics are forcing the industry to adapt within a year. Unlike traditional cloud workloads, AI costs are unpredictable because token usage varies even for identical prompts, while advanced reasoning models consume significantly more tokens despite falling prices.

    Both emphasized that effective AI FinOps requires intelligent orchestration, routing workloads to the cheapest capable models instead of defaulting to expensive frontier models. Sharma noted that AI costs extend beyond APIs to GPUs, storage, training, and organizational adoption. They also cautioned against relying solely on LLMs for operational automation. Deterministic systems, observability metrics, and human approvals remain essential guardrails. Ultimately, both stressed that FinOps is primarily an organizational and cultural discipline, recommending newcomers start with the FinOps Foundation before investing in tools.
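    The model-routing idea can be sketched the same way: estimate what each model would charge for the task and pick the cheapest one whose capability tier covers it, rather than defaulting to the frontier model. The price table and difficulty tiers below are made-up placeholders, not real vendor pricing.

```python
# Hypothetical price table (dollars per 1M tokens) and capability tiers.
MODELS = {
    "small":    {"price_in": 0.10, "price_out": 0.40, "max_difficulty": 1},
    "mid":      {"price_in": 1.00, "price_out": 4.00, "max_difficulty": 2},
    "frontier": {"price_in": 5.00, "price_out": 20.0, "max_difficulty": 3},
}

def estimate_cost(model: str, tokens_in: int, tokens_out: int) -> float:
    """Dollar cost of one call, given expected token counts."""
    m = MODELS[model]
    return (tokens_in * m["price_in"] + tokens_out * m["price_out"]) / 1_000_000

def route(difficulty: int, tokens_in: int, tokens_out: int) -> str:
    """Cheapest model whose capability tier covers the task's difficulty."""
    capable = [n for n, m in MODELS.items() if m["max_difficulty"] >= difficulty]
    return min(capable, key=lambda n: estimate_cost(n, tokens_in, tokens_out))
```

    Note that the token counts matter as much as the unit prices: a reasoning model that emits far more output tokens can cost more per task even when its per-token price falls, which is what makes AI spend hard to predict.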

    Learn more from The New Stack about the latest in FinOps: 

    Why FinOps Isn’t About Saving Money 

    FinOps Foundation’s FOCUS 1.2 Expands to SaaS, PaaS 

    Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
  • The New Stack Podcast

    How Microsoft is governing thousands of Kubernetes clusters without manual intervention

    07/05/2026 | 25 min
    Managing Kubernetes at fleet scale introduces significant complexity, especially as organizations expand from a few clusters to hundreds or thousands across cloud, on-premises, and edge environments. While GitOps remains the dominant model for declarative management, its traditional one-to-one repository-to-cluster approach struggles to handle multi-cluster realities such as global traffic routing, shared secrets, and unified observability. As Stephane Erbrech, Principal Software Engineer at Microsoft, explains, the challenge shifts from deployment to governance: maintaining consistency, security, and compliance across a vast distributed system without manual intervention.

    This need is amplified by the rise of AI workloads at the edge, where inference is increasingly decentralized. To address these challenges, Microsoft Azure Kubernetes Fleet Manager enables coordinated, staged rollouts across clusters, allowing teams to validate updates in lower-risk environments before production. Supporting this, Cilium Cluster Mesh provides seamless cross-cluster connectivity, enabling workload mobility and efficient resource use, especially for scarce GPU capacity. Together, these tools help modern platform teams manage lifecycle, networking, and orchestration at scale.
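    The staged-rollout pattern described above can be approximated in a few lines: apply the release stage by stage and refuse to promote it past any stage whose clusters fail a health check. The cluster names, stage layout, and callbacks here are illustrative stand-ins, not the Fleet Manager API.

```python
# Hypothetical fleet: lower-risk stages run before production.
STAGES = [
    ("canary",     ["dev-eastus"]),
    ("preprod",    ["staging-eastus", "staging-westeu"]),
    ("production", ["prod-eastus", "prod-westeu", "prod-edge-01"]),
]

def rollout(version, apply, healthy):
    """Apply `version` stage by stage; stop before promoting a bad release.

    `apply(cluster, version)` deploys; `healthy(cluster)` is the gate
    that must pass on every cluster in a stage before the next stage runs.
    """
    for stage, clusters in STAGES:
        for cluster in clusters:
            apply(cluster, version)
        if not all(healthy(c) for c in clusters):
            return f"halted at {stage}"
    return "rolled out"
```

    The point of the stage gate is that a regression caught in `canary` or `preprod` never reaches the production clusters at all, which is what removes the need for manual intervention at fleet scale.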

    Learn more from The New Stack about managing Kubernetes at fleet scale: 

    KubeFleet: The Future of Multicluster Kubernetes App Management

    Why Microsoft is betting on temporary identities to stop autonomous agents from going rogue

    Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
  • The New Stack Podcast

    Why long-running AI agents break on HTTP and how Ably is fixing it

    06/05/2026 | 31 min
    In this episode of The New Stack Makers, Matthew O’Riordan, CEO of Ably, explains how infrastructure originally built for human collaboration is now well-suited for long-running AI agents. While Ably initially resisted positioning itself as an AI company, the rise of agents that reason, call tools, and operate over extended periods revealed a natural fit for its real-time communication platform.

    O’Riordan highlights the limitations of HTTP for these use cases. While effective for short, request-response interactions, HTTP struggles with persistent, stateful experiences—such as handling dropped connections, multi-device usage, or mid-task interruptions. To address this, a new “durable session” layer is emerging, enabling continuous synchronization between agents and users through shared state, presence, and recovery mechanisms.

    Ably’s solution, AI Transport, augments existing architectures by keeping HTTP for requests while shifting responses to durable sessions. Features like mutable message streams and “live objects” allow seamless reconnection and collaboration. The goal is to provide a drop-in layer that developers can adopt without rethinking their stack—moving beyond traditional pub/sub models.
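    The core of the durable-session idea can be reduced to a small sketch: the server buffers every message with a sequence number, so a client that drops its connection mid-task (or joins from a second device) replays exactly what it missed instead of losing the stream. This is an illustration of the pattern only, not Ably's AI Transport API.

```python
class DurableSession:
    """Server-side message log that survives client disconnects (sketch)."""

    def __init__(self):
        self.log = []  # (seq, message) pairs kept until the session ends

    def publish(self, message):
        """Append a message (e.g. one chunk of an agent's streamed reply)."""
        self.log.append((len(self.log), message))

    def resume(self, last_seen_seq):
        """Everything the client missed since the sequence number it last saw."""
        return [m for seq, m in self.log if seq > last_seen_seq]
```

    Plain HTTP streaming has no equivalent of `resume`: once the response connection drops, the remaining chunks are simply gone, which is why long-running agent interactions need this extra layer.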

    Learn more from The New Stack about Ably and AI Transport: 

    How MCP Uses Streamable HTTP for Real-Time AI Tool Interaction

    Ably Touts Real-Time Starter Kits for Vercel and Netlify

    AI Agents Need Help. Here’s 4 Ways To Ship Software Reliably

    Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
About The New Stack Podcast
The New Stack Podcast is all about the developers, software engineers and operations people who build at-scale architectures that change the way we develop and deploy software. For more content from The New Stack, subscribe on YouTube at: https://www.youtube.com/c/TheNewStack
Podcast website
