llm-platform-security / SecGPT
An Execution Isolation Architecture for LLM-Based Agentic Systems
☆66 · Updated last month
Alternatives and similar repositories for SecGPT:
Users interested in SecGPT are comparing it to the repositories listed below.
- The D-CIPHER and NYU CTF baseline LLM Agents built for NYU CTF Bench ☆59 · Updated last month
- ☆40 · Updated last month
- Benchmark data from the article "AutoPT: How Far Are We from End2End Automated Web Penetration Testing?" ☆12 · Updated 4 months ago
- The automated prompt injection framework for LLM-integrated applications. ☆187 · Updated 6 months ago
- 🤖🛡️🔍🔒🔑 Tiny package designed to support red teams and penetration testers in exploiting large language model AI solutions. ☆23 · Updated 10 months ago
- LLM Platform Security: Applying a Systematic Evaluation Framework to OpenAI's ChatGPT Plugins ☆25 · Updated 7 months ago
- 🪐 A Database of Existing Security Vulnerability Patches to Enable Evaluation of Techniques (single-commit; multi-language) ☆38 · Updated 2 years ago
- Agent Security Bench (ASB) ☆66 · Updated last week
- This repository provides an implementation to formalize and benchmark Prompt Injection attacks and defenses ☆180 · Updated 2 months ago
- [USENIX Security'24] Official repository of "Making Them Ask and Answer: Jailbreaking Large Language Models in Few Queries via Disguise a… ☆86 · Updated 5 months ago
- This repo contains the code of the penetration test benchmark for Generative Agents presented in the paper "AutoPenBench: Benchmarking G… ☆23 · Updated 5 months ago
- ☆36 · Updated 5 months ago
- ☆55 · Updated 8 months ago
- A curated list of awesome resources about LLM supply chain security (including papers, security reports and CVEs) ☆57 · Updated 2 months ago
- ☆87 · Updated 3 weeks ago
- ☆29 · Updated 7 months ago
- ☆41 · Updated 3 weeks ago
- Official repo for GPTFUZZER: Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts ☆464 · Updated 6 months ago
- 🧠 LLMFuzzer - Fuzzing Framework for Large Language Models 🧠 LLMFuzzer is the first open-source fuzzing framework specifically designed … ☆269 · Updated last year
- LLM security and privacy ☆48 · Updated 5 months ago
- AIBugHunter: A Practical Tool for Predicting, Classifying and Repairing Software Vulnerabilities ☆39 · Updated 11 months ago
- A benchmark for evaluating the robustness of LLMs and defenses to indirect prompt injection attacks. ☆63 · Updated 11 months ago
- CS-Eval is a comprehensive evaluation suite for fundamental cybersecurity models and for the cybersecurity abilities of large language models. ☆39 · Updated 4 months ago
- A dataset intended to train an LLM on completely CVE-focused input and output. ☆56 · Updated 4 months ago
- SecLLMHolmes is a generalized, fully automated, and scalable framework to systematically evaluate the performance (i.e., accuracy and rea… ☆53 · Updated 4 months ago
- ☆25 · Updated last year
- [USENIX Security '24] An LLM-Assisted Easy-to-Trigger Backdoor Attack on Code Completion Models: Injecting Disguised Vulnerabilities agai… ☆43 · Updated last week
- ☆33 · Updated 8 months ago
- TensorFlow API analysis tool and malicious model detection tool ☆25 · Updated last month
- A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents. ☆115 · Updated last week