BSides Delaware 2025 - LLMsec 2025: A Practical Guide to Attacks and Mitigations - 9th Talk
Talk : LLMsec 2025: A Practical Guide to Attacks and Mitigations
Large Language Models (LLMs) are now powering business-critical applications—from chatbots and developer copilots to security analysis platforms. This rapid adoption brings new attack surfaces that traditional security models fail to address. This talk delivers a practical, attacker-focused tour of modern LLM vulnerabilities, including prompt injection, jailbreaks, safety evasion, model extraction, and insecure tool/plugin integrations.
Live demos using open-source models will illustrate how these attacks work in realistic environments and how they can be chained for greater impact. We’ll pair each exploit with actionable defensive strategies—such as prompt hardening, input/output filtering, context isolation, and AI red teaming—so attendees leave with the knowledge and tools to secure their own GenAI applications. No prior machine learning expertise is required; this session is built for security professionals on both the offensive and defensive sides.