What do Renaissance poets, Reddit trolls, and your company’s chatbot have in common? They’re all vulnerable to prompt injection. Host Emily Laird breaks down how language alone can hijack your AI systems, no malware, no hoodie, just a well-placed phrase. From direct attacks that rewrite instructions mid-chat to sneaky indirect threats buried in calendar invites and SVG files, Emily exposes the dark magic of prompt injection and why it’s terrifyingly effective. Tune in for a wild ride through multimodal attacks, accidental obedience, and the art of whispering lies to machines trained to listen.
Join the AI Weekly Meetups
Connect with Us: If you enjoyed this episode or have questions, reach out to Emily Laird on LinkedIn. Stay tuned for more insights into the evolving world of generative AI. And remember, you now know more about prompt injection.
Connect with Emily Laird on LinkedIn