
Inspire AI: Transforming RVA Through Technology and Automation

AI Ready RVA

68 episodes

  • Ep 66 - From Prompts To Process: Building Trustworthy AI Workflows w/ Tianzhen Lin

    16/2/2026 | 32 min
    When intelligence is everywhere but correctness is scarce, how do we lead without cutting corners? We sit down with Tianzhen (Tangent) Lin—veteran engineer and systems thinker—to unpack a practical, durable approach to building AI‑assisted products that hold up under pressure. No hype, no shortcuts: just the patterns that make teams faster and safer at the same time.

    We start by reframing large language models as “eager interns”: fast, helpful, and prone to saying yes. That mental model shifts responsibility back where it belongs—on leaders who must design workflows that surface assumptions, constrain degrees of freedom, and verify outcomes. Tangent explains why context remains a finite resource even with giant windows and how the “lost in the middle” effect undermines long prompts. The fix isn’t more chat; it’s better scaffolding. Specs, plans, and documentation become the backbone for repeatable success because they compress what matters and travel across sessions and teammates.

    From there, we dig into decomposition as a risk strategy. Breaking work into small, testable steps gives you early checkpoints to catch hallucinated requirements, unsafe libraries, or performance traps—like UI freezes from naive million‑row operations. Tangent shares a late‑night pivot where a strong, technology‑agnostic spec let the team re‑architect in hours, not days, turning a potential rewrite into a near‑seamless transition. We dive into verification as a non‑negotiable, the value of documentation as compressed context, and how institutional knowledge prevents the “sandcastle effect” when requirements shift or the tide comes in.

    The result is a playbook for leaders navigating an AI‑accelerated world: treat context like budget, invest in durable artifacts, decompose to control risk, and verify relentlessly. Do that well and AI stops being a confident amateur and starts acting like a reliable teammate. If you’re serious about trust, safety, and scalable speed, this conversation will sharpen your judgment and strengthen your systems. Subscribe, share with a teammate who ships software, and leave a review with the one workflow change you’ll make this week.
    Want to join a community of AI learners and enthusiasts? AI Ready RVA is leading the conversation and is rapidly rising as a hub for AI in the Richmond Region. Become a member and support our AI literacy initiatives.
  • Ep 65 - LLM-as-a-Judge: Evaluations That Scale

    09/2/2026 | 15 min
    What if your AI had a never-tired reviewer that caught quiet errors before they reached customers? We dive into LLM-as-judge—the simple but powerful pattern where one model generates and another evaluates—to show how leaders can scale quality without surrendering standards. From summaries that must capture the one sentence that matters to support answers that need to be grounded, safe, and on-brand, we break down where this approach shines and where it can fail you.

    We get practical with three evaluation formats—single-answer grading, pairwise comparisons, and reference-guided checks—and explain why ranking often beats raw scoring for stability. Then we map the biggest failure modes: confident nonsense that looks authoritative, biases you never asked for, and the danger of outsourcing values to a model’s defaults. The fix is leadership: define what good means, encode it in a rubric with clear anchors, and validate against human judgment before trusting the system.

    You’ll hear step-by-step patterns you can run next week: build a rubric with accuracy, groundedness, clarity, tone, safety, and actionability; use pairwise comparisons for model or draft selection; enable “jury mode” by aggregating multiple judgments; and force citations to specific source passages for verification over vibes. We also show how specialized judges—for factuality, tone, and compliance—reduce noise and improve reliability, and how monitoring helps you detect drift, compare model upgrades, and standardize quality across teams.
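The episode describes these patterns in prose only; as a hypothetical sketch, the "jury mode" aggregation over specialized judges can be expressed as a majority vote across pairwise comparisons. The judge functions below are invented stand-ins for separate LLM calls, each prompted with a different rubric:

```python
from collections import Counter
from typing import Callable, List

# A judge takes two candidate answers and votes "A" or "B".
Judge = Callable[[str, str], str]

def jury_verdict(judges: List[Judge], answer_a: str, answer_b: str) -> str:
    """Pairwise comparison in 'jury mode': each judge votes for the better
    answer, and the majority wins. Ranking like this tends to be more
    stable than averaging raw numeric scores."""
    votes = Counter(judge(answer_a, answer_b) for judge in judges)
    return votes.most_common(1)[0][0]

# Placeholder judges standing in for real LLM evaluators with
# factuality, tone, and compliance rubrics. The heuristics are
# illustrative only, not actual evaluation logic.
def factuality_judge(a: str, b: str) -> str:
    return "A" if "source:" in a.lower() else "B"   # prefers cited answers

def tone_judge(a: str, b: str) -> str:
    return "A" if not a.isupper() else "B"          # penalizes shouting

def compliance_judge(a: str, b: str) -> str:
    return "A" if "guarantee" not in a.lower() else "B"  # avoids overpromising

verdict = jury_verdict(
    [factuality_judge, tone_judge, compliance_judge],
    "Refund issued. Source: policy 4.2.",
    "REFUND GUARANTEED!!!",
)
print(verdict)  # "A": all three judges prefer the grounded, on-brand draft
```

In a real system each judge would be a model call with its own rubric prompt, and disagreement between judges is itself a useful signal to route the case to a human.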

    If you’re ready to move from “we sometimes use AI” to “we operate AI inside a quality system,” this conversation gives you the mental models and playbooks to start. Subscribe, share with a teammate who ships AI features, and leave a review with one value you’d encode in your rubric.
  • Ep 64 - Intelligence, Accountability, And You: From AI Slop to Sound Judgement

    02/2/2026 | 13 min
    The pace of AI can feel exhilarating until a polished report collapses under scrutiny and your team spends hours repairing “work slop.” We’re seeing a quiet shift across organizations: as intelligence becomes ambient, leadership’s edge moves from gathering information to evaluating it. That shift changes how we make calls, how we manage risk, and how we design trust into everyday workflows.

    We unpack practical decision hygiene that keeps speed from steamrolling substance. Treat AI outputs as drafts, not verdicts; verify facts, pressure-test conclusions, and define what “done” really means so polish doesn’t masquerade as insight. We share question prompts to expose missing data and faulty assumptions, and we draw clear lines between decision support and decision replacement—because confidence is not correctness, and accountability cannot be delegated to an algorithm.

We then move into risk management, where leaders operate as the safety net between model outputs and real-world consequences. From finance to healthcare to marketing, we outline why high-stakes decisions demand a human in the loop and how to establish reviews, stress tests, and override paths without smothering speed. You don’t need to build models to lead well; you need to know where they break, how bias creeps in, and which failure modes matter for money, health, fairness, and reputation.

    Finally, we design for trust. Adoption accelerates when people know where AI is used, who stays accountable, and how decisions align with values. We explore transparency, explainability, and psychological safety so teams feel augmented rather than quietly judged or replaced. The throughline is simple: AI can generate options, but it can’t weigh meaning or carry consequence. That’s your job. If you’re ready to turn ambient intelligence into durable advantage, join us and upgrade your role to evaluator in chief.

    Enjoy the conversation? Follow the show, share with a colleague, and leave a quick review—then tell us the one change you’ll make to improve AI evaluation on your team.
  • Ep 63 - Human In The Loop: Designing The Boundary Between Machines And Humans

    26/1/2026 | 10 min
    The moment an AI agent can issue refunds or change accounts, the conversation shifts from capability to responsibility. We dig into how to design trust between people and machines by choosing the right oversight model for the job: human in the loop for high-stakes decisions and human on the loop for fast, high-volume work. Along the way, we unpack concrete playbooks for customer service leaders and operators who need speed without sacrificing judgment.

    We start by drawing a clear line between decision-time approval and supervisory control, then show how confidence-based escalation creates dynamic autonomy. Instead of all-or-nothing automation, we use signals like model confidence, customer sentiment, value at risk, and ambiguity to route actions for auto-resolution or human review. We also break down synchronous versus asynchronous oversight, and why advanced teams separate planning (human approved) from execution (AI driven) to combine safety with scale.
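The episode stays at the concept level; as a hypothetical sketch of confidence-based escalation, the signals it names (model confidence, customer sentiment, value at risk, ambiguity) can drive a simple router. All field names and thresholds below are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Action:
    model_confidence: float    # 0.0-1.0, as reported by the model
    customer_sentiment: float  # -1.0 (angry) .. 1.0 (happy)
    value_at_risk: float       # dollars the action could move
    ambiguous: bool            # e.g. conflicting account records

def route(a: Action,
          min_confidence: float = 0.9,
          max_value: float = 100.0) -> str:
    """Return 'auto' for auto-resolution or 'human' for review.
    Thresholds are illustrative; real systems tune them per risk tier."""
    if a.ambiguous or a.customer_sentiment < -0.5:
        return "human"  # emotionally charged or unclear: escalate
    if a.model_confidence < min_confidence or a.value_at_risk > max_value:
        return "human"  # low confidence or high stakes: escalate
    return "auto"

print(route(Action(0.97, 0.1, 25.0, False)))   # auto: confident, low-risk
print(route(Action(0.97, -0.8, 25.0, False)))  # human: angry customer
```

The design choice is that escalation is dynamic per action rather than all-or-nothing: the same agent runs autonomously on routine cases and asks for help on the rest.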

    The examples ground the theory: a retailer that automated 40 percent of inquiries while escalating emotionally charged cases, an airline that trained its system through human corrections before handing off routine tickets, and insurers that pay clean claims instantly while auditing edge cases. You’ll hear a pragmatic checklist for safe scaling: map risk before tasks, set thresholds, give reviewers explanations, log everything, prevent automation bias, and train people to be AI supervisors. The goal isn’t to remove humans; it’s to elevate them—letting AI handle speed and repetition while humans guard empathy, accountability, and trust.

    Ready to build AI that knows when to ask for help? Subscribe, share this episode with a teammate, and leave a review with your top escalation trigger—we’ll feature the best ideas in a future show.
  • Ep 62 - Reconfiguring Work: A Playbook For Agentic AI Adoption

    19/1/2026 | 16 min
    When AI stops acting like a tool and starts acting like a teammate, the rules of work change. We explore what agentic AI really means for teams, decisions, and culture—and why the biggest blockers aren’t algorithms but fear, fatigue, and unclear purpose. Instead of chasing pilots that never scale, we walk through a practical, people-first playbook anchored in outcomes, trust, and daily usefulness.

    We break down battle-tested frameworks leaders are using right now: McKinsey’s North Star and reconfigured work model, BCG’s five must‑haves for AI upskilling, and Mercer’s human‑plus‑agent operating system. Along the way, we dive into candid case studies: how McKinsey’s “Have you asked Lily?” norm turned AI into habit, and how Bank of America’s “make work easier” principle drove adoption above 90% while strengthening governance. You’ll hear why distributed leadership and peer champions matter more than mandates, how to close the enthusiasm gap with honest communication, and how to design rollouts that reduce friction instead of adding change fatigue.

    If you’re leading transformation, you’ll leave with a Monday morning checklist: define outcomes, build trust with transparent governance, co-create with employees, overinvest in role-based upskilling, model usage from the top, design for daily usefulness, and keep wins visible to sustain momentum. The edge isn’t competing with AI—it’s orchestrating it to amplify human judgment and deliver measurable value. Subscribe, share with a colleague, and tell us: what’s your North Star for agentic AI where you work?


About Inspire AI: Transforming RVA Through Technology and Automation

Our mission is to cultivate AI literacy in the Greater Richmond Region through awareness, community engagement, education, and advocacy. In this podcast, we spotlight companies and individuals in the region who are pioneering the development and use of AI.
