Aye Aye AI Podcast
In the episode, they delve into the critical security threat of indirect prompt injection—a vulnerability that allows attackers to manipulate GenAI systems using malicious instructions embedded in data like emails or documents.
From the risks of disinformation, phishing, and denial of service to strategies for mitigating these challenges, Chris and Matt share invaluable insights. They also discuss how the integration of large language models (LLMs) into organisational systems expands the attack surface—and why strong safeguards are essential.
💡 Key takeaway: RAG (Retrieval-Augmented Generation) is powerful, but without the right protections, it’s vulnerable.