
MLOps.community

Hosted by Demetrios

513 episodes

  • MLOps.community

    The Modern Software Engineer

    14/04/2026 | 53 min
    This episode is brought to you by the MLflow team. Check out more information at MLflow.org.
    Mihail Eric is Head of AI at Monaco and Adjunct Lecturer at Stanford University, where he teaches CS146S: "The Modern Software Developer" — the first course in the world dedicated to how AI is transforming every stage of the software development lifecycle. With 12+ years building production AI systems at Amazon Alexa, Storia AI (YC S24), and early-stage startups, Mihail has one of the most grounded, practitioner-level takes on what it actually means to be a software engineer in 2026.
    The Modern Software Engineer // MLOps Podcast #370 with Mihail Eric, Head of AI at Monaco
    🧠 What the modern software engineer actually looks like — why the job description has fundamentally shifted from writing code to designing systems and directing agents
    ⚙️ Agents require more thinking, not less — why the engineers getting the most out of coding agents are the ones who invest the most upfront in architecture, planning, and codebase structure
    🎓 Inside Stanford's "Modern Software Developer" course — what Mihail teaches in the first CS course in the world focused entirely on AI-transformed software development
    🏗️ From writing code to designing systems — how the best developers are repositioning themselves as architects of agentic workflows rather than line-by-line coders
    🔁 The Build System: how to run agents at scale — practical lessons from building multi-agent pipelines, parallel subagent batches, and automated retrospectives (see the sketch after this list)
    📉 What junior engineers should actually focus on — the skills that remain irreplaceable and the paths that still produce strong software engineers in an AI-first world
    🚀 Building Monaco's AI-native revenue engine — what it's like building AI infrastructure for a fast-moving $35M-funded startup disrupting enterprise CRM
    🎯 How to ace AI engineering interviews — Mihail's framework for demonstrating real AI engineering competence beyond prompt engineering basics.

    Essential watching for software engineers, ML practitioners, and engineering managers who want an honest, practitioner-level view of where the profession is going — from someone who's both teaching it at Stanford and building it in production.
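    As a rough illustration of the parallel-subagent pattern from the Build System point above: a minimal asyncio sketch, assuming a hypothetical run_agent call standing in for whatever coding-agent CLI or API you use. The episode doesn't prescribe a specific framework.

    ```python
    import asyncio

    # Hypothetical stand-in for a real coding-agent invocation (CLI or API);
    # nothing here is specific to the tools discussed in the episode.
    async def run_agent(task: str) -> str:
        await asyncio.sleep(0.1)  # placeholder for actual agent latency
        return f"outcome of: {task}"

    async def build_batch(tasks: list[str]) -> str:
        # Fan out one subagent per task and run the whole batch in parallel.
        results = await asyncio.gather(*(run_agent(t) for t in tasks))
        # Automated retrospective: one extra pass summarizing the batch.
        return await run_agent("retrospective over:\n" + "\n".join(results))

    print(asyncio.run(build_batch(["refactor auth", "add pagination", "fix flaky test"])))
    ```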

    🔗 Links & Resources
    Mihail Eric on LinkedIn: https://www.linkedin.com/in/mihaileric/
    Mihail's website: https://www.mihaileric.com
    Stanford course "The Modern Software Developer": https://themodernsoftware.dev/
    Maven course — AI Software Development: From First Prompt to Production Code: https://maven.com/the-modern-software-developer/ai-course
    Free AI Engineer interview prep course: https://course.aiengineermastery.com/
    Monaco (AI-native revenue engine): https://monaco.com
    MLOps.community Slack: https://go.mlops.community/slack

    ⏱️ Timestamps
    00:00 Intro — Mihail Eric & Monaco
    04:00 What has actually changed for software engineers in 2026
    09:00 Inside Stanford's "Modern Software Developer" course
    15:00 Why agents require more human thinking, not less
    21:00 From writing code to designing systems — the architect mindset
    27:00 The Build System: running agents at scale in production
    33:00 What junior engineers should focus on right now
    39:00 Building AI infrastructure at Monaco
    44:00 How to demonstrate real AI engineering competence
    49:00 Skills that will remain irreplaceable
    52:00 Rapid fire/closing thoughts
  • MLOps.community

    We Cut LLM Latency by 70% in Production

    10/04/2026 | 1 h 5 min
    Maher Hanafi is an engineering leader who went from zero AI experience to self-hosting LLMs at enterprise scale — managing GPU costs, optimizing inference with TensorRT LLM, and building an AI platform for HR tech. In this conversation, he breaks down exactly how his team cut latency by 70%, reduced GPU spend through counterintuitive scaling strategies, and navigated the messy reality of taking AI from proof-of-concept to production.

    How We Cut LLM Latency 70% With TensorRT in Production // MLOps Podcast #369 with Maher Hanafi, SVP of Engineering at Betterworks

    Key topics covered:
    The AI Iceberg — Why the invisible work behind AI (performance, latency, throughput, cost, accuracy) is harder than building the features themselves
    GPU Cost Optimization — How upgrading to more expensive GPUs actually saved money by reducing total runtime hours (illustrative arithmetic: a GPU at twice the hourly rate that finishes the same job in a third of the time cuts that job's compute bill by a third)
    TensorRT LLM Deep Dive — Rewiring neural networks to match GPU architecture for 50-70% latency reduction (see the sketch after this list)
    Cold Start Solutions — Using AWS FSx, baking models into container images, and cutting minutes off spin-up times
    KV Cache & In-Flight Batching — Why using one model per GPU with maximum KV cache beats cramming multiple models together
    Scheduled & Dynamic Scaling — Pattern-based scaling for HR tech workloads (nights, weekends, end-of-quarter spikes)
    Verticalized AI Platform — Building horizontal AI infrastructure that serves multiple HR product verticals
    AI Engineering Lab — How junior vs. senior engineers adopted AI coding tools differently, and the cultural shift that followed
    Agentic Coding in Practice — Navigating AI coding agent costs, quality control, and redefining the SDLC
    Chinese Models & Compliance — Why enterprise customers block DeepSeek/Qwen and the geopolitics of model training data
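    To ground the TensorRT LLM and KV-cache points above, here is a minimal sketch using the high-level tensorrt_llm Python LLM API: one model per GPU, with most of the free VRAM handed to the KV cache so in-flight batching can pack concurrent requests. The exact class and parameter names vary across releases, so treat them as assumptions to verify against your installed version; the model name is only an example.

    ```python
    from tensorrt_llm import LLM, SamplingParams
    from tensorrt_llm.llmapi import KvCacheConfig  # import path may differ by version

    # One model per GPU, generous KV-cache allocation (assumed parameter name):
    # more cache means in-flight batching can serve more requests concurrently.
    llm = LLM(
        model="meta-llama/Llama-3.1-8B-Instruct",  # example model, not from the episode
        kv_cache_config=KvCacheConfig(free_gpu_memory_fraction=0.9),
    )

    params = SamplingParams(temperature=0.2, max_tokens=128)
    for out in llm.generate(["Summarize this quarter's goal progress."], params):
        print(out.outputs[0].text)
    ```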

    This episode is for engineering leaders building AI in production, MLOps engineers optimizing GPU infrastructure, and anyone navigating the gap between AI demos and enterprise-scale deployment.

    Links & Resources:
    TensorRT LLM: https://github.com/NVIDIA/TensorRT-LLM
    NVIDIA Run:ai Model Streamer (cold start optimization): https://developer.nvidia.com/blog/reducing-cold-start-latency-for-llm-inference-with-nvidia-runai-model-streamer/
    vLLM vs TensorRT-LLM comparison: https://northflank.com/blog/vllm-vs-tensorrt-llm-and-how-to-run-them

    Timestamps:
    [00:00] Optimizing GPU Usage and Latency
    [00:21] Learning AI as Leadership
    [04:34] AI Cost Centers
    [13:56] Throughput and Infrastructure Efficiency
    [18:10] Scaling and Unit Economics
    [24:14] Championing AI ROI
    [36:11] Queue to Value Engine
    [41:30] Failed Product Features
    [46:12] Agentic Engineering Costs
    [58:49] AI Self-Hosting in Engineering
    [1:04:40] Wrap up
  • MLOps.community

    Getting Humans Out of the Way: How to Work with Teams of Agents

    07/04/2026 | 50 min
    Rob Ennals is the creator of Broomy, an open-source IDE for working with many coding agents in parallel. He previously worked at Meta, Quora, Google Search, and Intel Research.

    Getting Humans Out of the Way: How to Work with Teams of Agents // MLOps Podcast #368 with Rob Ennals, the Creator of Broomy

    Join the Community: https://go.mlops.community/YTJoinIn
    Get the newsletter: https://go.mlops.community/YTNewsletter
    MLOps GPU Guide: https://go.mlops.community/gpuguide

    // Abstract
    Most people cripple coding agents by micromanaging them—reviewing every step and becoming the bottleneck.

    The shift isn’t to better supervise agents, but to design systems where they work well on their own: parallelized, self-validating, and guided by strong processes.

    Done right, you don’t lose control—you gain leverage. Like paving roads for cars, the real unlock is reshaping the environment so AI can move fast.
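    A minimal sketch of the "self-validating" half of that argument, assuming a hypothetical agent_run callable that writes an agent's changes into a sandbox directory. The gate here is just a test-suite run; it is not Broomy-specific.

    ```python
    import subprocess
    from typing import Callable, Optional

    def passes_tests(workdir: str) -> bool:
        # Validate the agent's output mechanically before any human sees it.
        result = subprocess.run(["pytest", "-q"], cwd=workdir, capture_output=True)
        return result.returncode == 0

    def run_unattended(agent_run: Callable[[str], str], task: str,
                       attempts: int = 3) -> Optional[str]:
        for _ in range(attempts):
            workdir = agent_run(task)  # hypothetical: agent edits code in a sandbox
            if passes_tests(workdir):
                return workdir         # accepted with no human in the loop
        return None                    # escalate to a human only after repeated failures
    ```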

    // Bio
    Rob Ennals is the creator of Broomy, an open-source IDE designed for working effectively with many agents in parallel. He previously worked at Meta, Quora, Google Search, and Intel Research. He has a PhD in Computer Science from the University of Cambridge.

    // Related Links
    Website: https://robennals.org/
    https://broomy.org/
    https://learnai.robennals.org/ (not yet announced, but should be by the time of the podcast)

    ~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~
    Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
    Join our Slack community [https://go.mlops.community/slack]
    Follow us on X/Twitter [@mlopscommunity](https://x.com/mlopscommunity) or [LinkedIn](https://go.mlops.community/linkedin)
    Sign up for the next meetup: [https://go.mlops.community/register]
    MLOps Swag/Merch: [https://shop.mlops.community/]

    Connect with Demetrios on LinkedIn: /dpbrinkm
    Connect with Rob on LinkedIn: /robennals/

    Timestamps:
    [00:00] Agent Optimization Strategies
    [00:21] Visual Regression Explanation
    [05:35] Automated QA for Videos
    [13:05] Verification System Design
    [19:48] Agent Selection Strategies
    [30:48] Parallel Agent Management
    [35:30] Containerization and Cost Estimation
    [42:48] Shifting to Agent Orchestration
    [50:10] Wrap up
  • MLOps.community

    Fixing GPU Starvation in Large-Scale Distributed Training

    03/04/2026 | 52 min
    Kashish Mittal is a Staff Software Engineer at Uber, working on large-scale distributed systems and core backend infrastructure.

    Fixing GPU Starvation in Large-Scale Distributed Training // MLOps Podcast #367 with Kashish Mittal, Staff Software Engineer at Uber

    Join the Community: https://go.mlops.community/YTJoinIn
    Get the newsletter: https://go.mlops.community/YTNewsletter
    MLOps GPU Guide: https://go.mlops.community/gpuguide

    // Abstract
    Kashish zooms out to discuss a universal industry pattern: how infrastructure—specifically data loading—is almost always the hidden constraint for ML scaling.

    The conversation dives deep into a recent architectural war story. Kashish walks through the full-stack profiling and detective work required to solve a massive GPU starvation bottleneck. By redesigning the Petastorm caching layer to bypass CPU transformation walls and uncovering hidden distributed race conditions, his team boosted GPU utilization to 60%+ and cut training time by 80%. Kashish also shares his philosophy on the fundamental trade-offs between latency and efficiency in GPU serving.
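    The underlying pattern is easy to surface in any training loop: time how long each step waits on the input pipeline versus how long the GPU spends computing. A minimal PyTorch sketch with synthetic stand-ins (this illustrates the diagnosis, not the team's actual Petastorm redesign):

    ```python
    import time
    import torch
    from torch import nn
    from torch.utils.data import DataLoader, TensorDataset

    # Synthetic stand-ins for the real Parquet/Petastorm pipeline and model.
    dataset = TensorDataset(torch.randn(20_000, 512), torch.randint(0, 10, (20_000,)))
    model = nn.Linear(512, 10).cuda()
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    # Workers plus prefetching keep batches ready so the GPU is never starved.
    loader = DataLoader(dataset, batch_size=256, num_workers=8, pin_memory=True,
                        prefetch_factor=4, persistent_workers=True)

    data_wait = gpu_time = 0.0
    t = time.perf_counter()
    for x, y in loader:
        fetched = time.perf_counter()
        data_wait += fetched - t          # how long the GPU sat idle on input
        x, y = x.cuda(non_blocking=True), y.cuda(non_blocking=True)
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
        torch.cuda.synchronize()          # make the compute timing honest
        t = time.perf_counter()
        gpu_time += t - fetched
    print(f"starved: {data_wait:.1f}s  computing: {gpu_time:.1f}s")
    ```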

    // Bio
    Kashish Mittal is a Staff Software Engineer at Uber, where he architects the hyperscale machine learning infrastructure that powers Uber’s core mobility and delivery marketplaces. Prior to Uber, Kashish spent nearly a decade at Google building highly scalable, low-latency distributed ML systems for flagship products, including YouTube Ads and Core Search Ranking. His engineering expertise lies at the intersection of distributed systems and AI—specifically focusing on large-scale data processing, eliminating critical I/O bottlenecks, and maximizing GPU efficiency for petabyte-scale training pipelines. When he isn't hunting down distributed race conditions, he is a passionate advocate for open-source architecture and building reproducible, high-throughput ML systems.

    // Related Links
    Website: https://www.uber.com/
    Getting Humans Out of the Way: How to Work with Teams of Agents // MLOps Podcast #368 with Rob Ennals, the Creator of Broomy: https://www.youtube.com/watch?v=ie1M8p-SVfM

    ~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~
    Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
    Join our Slack community [https://go.mlops.community/slack]
    Follow us on X/Twitter [@mlopscommunity](https://x.com/mlopscommunity) or [LinkedIn](https://go.mlops.community/linkedin)
    Sign up for the next meetup: [https://go.mlops.community/register]
    MLOps Swag/Merch: [https://shop.mlops.community/]

    Connect with Demetrios on LinkedIn: /dpbrinkm
    Connect with Kashish on LinkedIn: /kashishmittal/

    Timestamps:
    [00:00] Local dataset caching
    [00:30] Engineers Evolving Roles
    [04:44] GPU Resource Management
    [10:21] GPU Utilization Issues
    [21:49] More GPU War Stories
    [32:12] Model Serving Issues
    [39:58] Reflective Learning in Coding
    [43:23] Workflow and Reflective Skills
    [52:30] Wrap up
  • MLOps.community

    Spec Driven Development, Workflows, and the Recent Coding Agent Conference

    31/03/2026 | 59 min
    Jens Bodal is a Senior Software Engineer II working independently, focusing on backend systems, software architecture, and building scalable solutions across client projects.

    This One Shift Makes Developers Obsolete // MLOps Podcast #366 with Jens Bodal, Senior Software Engineer II, Independent

    Join the Community: https://go.mlops.community/YTJoinIn
    Get the newsletter: https://go.mlops.community/YTNewsletter
    MLOps GPU Guide: https://go.mlops.community/gpuguide

    // Abstract
    AI agents are shifting the role of developers from writing code to defining intent. This conversation explores why specs are becoming more important than implementation, what breaks in real-world systems, and how engineering teams need to rethink workflows in an agent-driven world.
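    One concrete way to read "specs over implementation" (an illustration, not necessarily Jens's workflow): write the intent as executable checks and let an agent produce whatever code satisfies them. Here slugify and its module are hypothetical names for the function the agent would implement.

    ```python
    # spec_slugify.py: the spec is the contract; the implementation is disposable.
    from myproject.text import slugify  # hypothetical module the agent fills in

    def test_lowercases_and_hyphenates():
        assert slugify("Hello World") == "hello-world"

    def test_strips_punctuation():
        assert slugify("Specs, not code!") == "specs-not-code"

    def test_collapses_repeated_whitespace():
        assert slugify("a   b") == "a-b"
    ```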

    // Bio
    Jens Bodal is a senior software engineer based in Edmonds, Washington, with nine years of experience building developer tooling, internal platforms, and web infrastructure. He spent seven years as an SDE II at Amazon, working on teams including Amazon Games Studio and the AWS Events Management Platform. His work has focused on developer tooling, CI/CD systems, testing infrastructure, and improving the developer experience for teams operating production services. He is particularly interested in developer experience and the growing ecosystem of local tools that help engineers build and run AI systems on infrastructure they control.

    // Related Links
    Website: https://bodal.dev
    GitHub: https://github.com/jensbodal
    https://www.youtube.com/watch?v=Yp7LYdbOuwE

    ~~~~~~~~ ✌️Connect With Us ✌️ ~~~~~~~
    Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
    Join our Slack community [https://go.mlops.community/slack]
    Follow us on X/Twitter [@mlopscommunity](https://x.com/mlopscommunity) or [LinkedIn](https://go.mlops.community/linkedin)
    Sign up for the next meetup: [https://go.mlops.community/register]
    MLOps Swag/Merch: [https://shop.mlops.community/]

    Connect with Demetrios on LinkedIn: /dpbrinkm
    Connect with Jens on LinkedIn: /jensbodal

    Timestamps:
    [00:00] Specification vs Code
    [00:25] Conference Realizations and Insights
    [09:01] Agents and Orchestration Insights
    [10:39] Coding Agents and Talent
    [18:10] Sub-agent Design Concepts
    [25:18] Evaling on Vibes
    [33:23] Walled Garden and Proxies
    [41:48] Spec-Driven Development Limitations
    [46:56] Code Ownership vs Authorship
    [50:49] Engineering Ownership and PMs
    [53:47] Skill Creation and Iteration
    [58:40] Wrap up


About MLOps.community

Relaxed conversations around getting AI into production, whatever shape that may come in (agentic, traditional ML, LLMs, vibes, etc.)
