llm-platform-security / SecGPT
An Execution Isolation Architecture for LLM-Based Agentic Systems
☆70 · Updated 2 months ago
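SecGPT's execution isolation idea is to run each LLM-based app in its own isolated execution environment and route all user and inter-app interaction through a trusted mediator, so a compromised or malicious app cannot directly read another app's data or hijack its actions. The snippet below is a minimal, illustrative sketch of that hub-and-spoke isolation pattern using OS processes; the names (`Hub`, `spoke_main`, `query`) are hypothetical and are not SecGPT's actual API.

```python
# Minimal sketch of hub-and-spoke execution isolation (illustrative only,
# not SecGPT's real implementation): each app runs as a "spoke" in its own
# OS process, and a trusted "hub" mediates every message.
from multiprocessing import Process, Pipe
from multiprocessing.connection import Connection


def spoke_main(app_name: str, conn: Connection) -> None:
    """Run one app in an isolated process; it only sees its own pipe to the hub."""
    while True:
        msg = conn.recv()
        if msg == "shutdown":
            break
        # A real spoke would invoke an LLM or tool here; we just echo a result.
        conn.send(f"{app_name} handled: {msg!r}")


class Hub:
    """Trusted mediator: owns all pipes, so spokes never talk to each other directly."""

    def __init__(self, app_names):
        self.spokes = {}
        for name in app_names:
            parent_end, child_end = Pipe()
            proc = Process(target=spoke_main, args=(name, child_end), daemon=True)
            proc.start()
            self.spokes[name] = (proc, parent_end)

    def query(self, app_name: str, request: str) -> str:
        # Policy checks (user permission prompts, data-flow rules) would go here.
        _, conn = self.spokes[app_name]
        conn.send(request)
        return conn.recv()

    def shutdown(self) -> None:
        for proc, conn in self.spokes.values():
            conn.send("shutdown")
            proc.join()


if __name__ == "__main__":
    hub = Hub(["calendar_app", "email_app"])
    print(hub.query("calendar_app", "list today's events"))
    hub.shutdown()
```

The key design point is that each spoke holds only a pipe to the hub, never to another spoke, so any cross-app data flow has to pass the hub's policy checks.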
Alternatives and similar repositories for SecGPT:
Users interested in SecGPT are comparing it to the repositories listed below:
- The D-CIPHER and NYU CTF baseline LLM Agents built for NYU CTF Bench ☆65 · Updated 2 weeks ago
- ☆46 · Updated last month
- Agent Security Bench (ASB) ☆75 · Updated 3 weeks ago
- ☆93 · Updated last month
- CVE-Bench: A Benchmark for AI Agents’ Ability to Exploit Real-World Web Application Vulnerabilities ☆33 · Updated last week
- Benchmark data from the article "AutoPT: How Far Are We from End2End Automated Web Penetration Testing?" ☆13 · Updated 5 months ago
- This repository provides a benchmark for prompt injection attacks and defenses ☆188 · Updated this week
- CS-Eval is a comprehensive evaluation suite for assessing the cybersecurity capabilities of foundation models and large language models. ☆39 · Updated 4 months ago
- 🤖🛡️🔍🔒🔑 Tiny package designed to support red teams and penetration testers in exploiting large language model AI solutions. ☆23 · Updated 11 months ago
- ☆111 · Updated 9 months ago
- 🪐 A Database of Existing Security Vulnerability Patches to Enable Evaluation of Techniques (single-commit; multi-language) ☆38 · Updated last week
- This repo contains the code of the penetration test benchmark for Generative Agents presented in the paper "AutoPenBench: Benchmarking G… ☆26 · Updated 6 months ago
- This is a dataset intended to train an LLM for completely CVE-focused input and output. ☆59 · Updated 4 months ago
- ☆59 · Updated 9 months ago
- A comprehensive local Linux Privilege-Escalation Benchmark ☆32 · Updated 4 months ago
- [USENIX Security '24] Official repository of "Making Them Ask and Answer: Jailbreaking Large Language Models in Few Queries via Disguise a… ☆86 · Updated 6 months ago
- ☆38 · Updated 6 months ago
- ☆25 · Updated last year
- SecLLMHolmes is a generalized, fully automated, and scalable framework to systematically evaluate the performance (i.e., accuracy and rea… ☆55 · Updated 5 months ago
- Code snippets to reproduce MCP tool poisoning attacks. ☆78 · Updated last week
- ☆59 · Updated 5 months ago
- ☆67 · Updated last month
- Official repo for GPTFUZZER: Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts ☆477 · Updated 7 months ago
- LLM Platform Security: Applying a Systematic Evaluation Framework to OpenAI's ChatGPT Plugins ☆25 · Updated 8 months ago
- The automated prompt injection framework for LLM-integrated applications. ☆198 · Updated 7 months ago
- A curated list of awesome resources about LLM supply chain security (including papers, security reports and CVEs) ☆66 · Updated 3 months ago
- Repository for "SecurityEval Dataset: Mining Vulnerability Examples to Evaluate Machine Learning-Based Code Generation Techniques" publis… ☆67 · Updated last year
- AIBugHunter: A Practical Tool for Predicting, Classifying and Repairing Software Vulnerabilities ☆40 · Updated last year
- A benchmark for evaluating the robustness of LLMs and defenses to indirect prompt injection attacks. ☆65 · Updated last year
- [USENIX Security '24] An LLM-Assisted Easy-to-Trigger Backdoor Attack on Code Completion Models: Injecting Disguised Vulnerabilities agai… ☆44 · Updated last month