llm-platform-security / SecGPT
SecGPT: An execution isolation architecture for LLM-based systems
☆49 · Updated 3 weeks ago
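The tagline above describes execution isolation for LLM-based apps. As a rough illustration of that general idea only (not the repository's actual API; the `Hub` and `calculator_app` names below are hypothetical), the sketch runs each app in its own OS process and lets a mediating hub decide which apps may be invoked and route messages to them:

```python
# Minimal sketch of the execution-isolation idea: each untrusted "app" runs in
# a separate OS process and can only exchange messages with a hub over a pipe,
# so a misbehaving app cannot directly touch the hub's state or other apps.
# All names here are illustrative, not SecGPT's real interfaces.
from multiprocessing import Process, Pipe

def calculator_app(conn):
    """Untrusted app: receives a request string, sends back a result string."""
    request = conn.recv()
    try:
        # Deliberately restricted: only handles simple "a+b" addition requests.
        a, b = (int(x) for x in request.split("+"))
        conn.send(f"{a + b}")
    except Exception as exc:
        conn.send(f"error: {exc}")
    finally:
        conn.close()

class Hub:
    """Mediator: the only component that spawns apps and routes messages."""
    def __init__(self, allowed_apps):
        self.allowed_apps = allowed_apps  # name -> callable run in a child process

    def query(self, app_name, request, timeout=5):
        if app_name not in self.allowed_apps:   # policy check before any execution
            return "denied: app not permitted"
        parent_conn, child_conn = Pipe()
        proc = Process(target=self.allowed_apps[app_name], args=(child_conn,))
        proc.start()
        parent_conn.send(request)
        reply = parent_conn.recv() if parent_conn.poll(timeout) else "error: timeout"
        proc.join(timeout)
        if proc.is_alive():
            proc.terminate()                     # don't let a hung app linger
        return reply

if __name__ == "__main__":
    hub = Hub({"calculator": calculator_app})
    print(hub.query("calculator", "2+3"))       # -> 5
    print(hub.query("filesystem", "rm -rf"))    # -> denied: app not permitted
```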
Related projects
Alternatives and complementary repositories for SecGPT
- LLM Platform Security: Applying a Systematic Evaluation Framework to OpenAI's ChatGPT Plugins ☆25 · Updated 3 months ago
- Agent Security Bench (ASB) ☆40 · Updated 2 weeks ago
- ☆11 · Updated 3 weeks ago
- [USENIX Security '24] Official repository of "Making Them Ask and Answer: Jailbreaking Large Language Models in Few Queries via Disguise a… ☆56 · Updated last month
- This repository provides an implementation to formalize and benchmark prompt injection attacks and defenses ☆146 · Updated 2 months ago
- ☆22 · Updated last month
- The automated prompt injection framework for LLM-integrated applications. ☆163 · Updated 2 months ago
- A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents. ☆65 · Updated this week
- LLM security and privacy ☆41 · Updated last month
- [USENIX Security '24] An LLM-Assisted Easy-to-Trigger Backdoor Attack on Code Completion Models: Injecting Disguised Vulnerabilities agai… ☆25 · Updated 3 weeks ago
- ☆63 · Updated this week
- A dataset intended to train an LLM on completely CVE-focused input and output. ☆44 · Updated last week
- A benchmark for evaluating the robustness of LLMs and defenses to indirect prompt injection attacks. ☆47 · Updated 7 months ago
- [NeurIPS 2024] Official implementation for "AgentPoison: Red-teaming LLM Agents via Memory or Knowledge Base Backdoor Poisoning" ☆59 · Updated 3 months ago
- Official repo for GPTFUZZER: Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts ☆403 · Updated last month
- Papers about red teaming LLMs and multimodal models. ☆78 · Updated last month
- SecLLMHolmes is a generalized, fully automated, and scalable framework to systematically evaluate the performance (i.e., accuracy and rea… ☆39 · Updated 2 weeks ago
- Repository for "SecurityEval Dataset: Mining Vulnerability Examples to Evaluate Machine Learning-Based Code Generation Techniques" publis… ☆55 · Updated last year
- ☆38 · Updated 4 months ago
- ☆40 · Updated 6 months ago
- ☆96 · Updated 4 months ago
- [ICML 2024] COLD-Attack: Jailbreaking LLMs with Stealthiness and Controllability ☆110 · Updated 2 months ago
- ☆29 · Updated last month
- ☆36 · Updated this week
- A repository of Language Model Vulnerabilities and Exposures (LVEs). ☆107 · Updated 8 months ago
- [USENIX Security 2025] PoisonedRAG: Knowledge Corruption Attacks to Retrieval-Augmented Generation of Large Language Models ☆93 · Updated last month
- 🤖🛡️🔍🔒🔑 Tiny package designed to support red teams and penetration testers in exploiting large language model AI solutions. ☆16 · Updated 6 months ago
- 🧠 LLMFuzzer - Fuzzing Framework for Large Language Models 🧠 LLMFuzzer is the first open-source fuzzing framework specifically designed … ☆233 · Updated 9 months ago
- ☆39 · Updated 9 months ago
- Risks and targets for assessing LLMs & LLM vulnerabilities ☆25 · Updated 5 months ago