BSides Delaware 2025 - LLMsec 2025: A Practical Guide to Attacks and Mitigations - 9th Talk
About BSides Delaware Conference
Security BSides Delaware is Delaware’s premier grassroots cybersecurity conference, one of the most enduring security gatherings in the mid-Atlantic. Since 2010 (the 12th ever BSides conference), it has brought together security researchers, practitioners, students, and enthusiasts for knowledge sharing and community building.
Held November 14-15, 2025, at the University of Delaware’s UD Fintech building, this 16th annual conference features technical talks, hands-on workshops, CTF competitions, specialized villages, and extensive networking. BSides Delaware embodies the Security BSides spirit—volunteer-run, non-profit, prioritizing learning and community over commercialization.
Talk: LLMsec 2025: A Practical Guide to Attacks and Mitigations
Talk Overview
LLMs now power business-critical applications (chatbots, developer copilots, security analysis, automated decision-making), bringing new attack surfaces traditional security models don’t address. Unlike conventional AppSec bugs, LLM security challenges arise from their probabilistic nature.
This talk delivered a practical, attacker-focused tour of LLM vulnerabilities: prompt injection, jailbreaks, safety evasion, model extraction, and insecure tool integrations—emphasizing hands-on demonstrations over theory.
Live Demonstrations and Attack Scenarios
Live demos using open-source models (Mistral, Llama) illustrated attacks in realistic environments:
-
Prompt Injection: Direct and indirect injection to bypass restrictions, extract training data, execute unauthorized actions
-
Jailbreaking: Role-playing attacks, encoded payloads, adversarial prompts to bypass safety guardrails
-
RAG Poisoning: Injecting malicious content into vector databases to manipulate responses
-
Tool Execution Vulnerabilities: Tricking function-calling capabilities into unauthorized operations
-
Model Extraction: Extracting proprietary behavior and training data
Each attack paired with real-world context from actual incidents and production breaches.
Defensive Strategies and Mitigations
Actionable strategies: prompt hardening (multi-layer defenses, delimiter separation), input sanitization (injection detection, content filtering), output filtering (validation, PII redaction), context isolation (sandboxing, privilege separation), tool policy enforcement (whitelist-based calling), AI red teaming (OWASP Top 10 for LLMs), and monitoring/telemetry (real-time logging, anomaly detection).
Introduced the LLMSec Demo application—an open-source training platform demonstrating vulnerable vs. defended implementations side-by-side.
Practical Takeaways
Attendees left with hands-on attack knowledge, defense playbooks, testing tools, risk assessment frameworks, and open-source resources—bridging the gap between GenAI hype and production security reality.
Slides can be found here: View Slides