llm-platform-security / SecGPT
An Execution Isolation Architecture for LLM-Based Agentic Systems
☆84 · Updated 6 months ago
Alternatives and similar repositories for SecGPT
Users interested in SecGPT are comparing it to the repositories listed below.
- The D-CIPHER and NYU CTF baseline LLM Agents built for NYU CTF Bench ☆89 · Updated last week
- CVE-Bench: A Benchmark for AI Agents’ Ability to Exploit Real-World Web Application Vulnerabilities ☆69 · Updated 2 weeks ago
- CyberGym is a large-scale, high-quality cybersecurity evaluation framework designed to rigorously assess the capabilities of AI agents on… ☆48 · Updated last week
- ☆63 · Updated 2 months ago
- The automated prompt injection framework for LLM-integrated applications. ☆220 · Updated 10 months ago
- An autonomous LLM-agent for large-scale, repository-level code auditing ☆186 · Updated 2 weeks ago
- CS-Eval is a comprehensive evaluation suite for fundamental cybersecurity models or large language models' cybersecurity ability. ☆43 · Updated 8 months ago
- 🧠 LLMFuzzer - Fuzzing Framework for Large Language Models 🧠 LLMFuzzer is the first open-source fuzzing framework specifically designed … ☆303 · Updated last year
- A curated list of awesome resources about LLM supply chain security (including papers, security reports and CVEs) ☆81 · Updated 6 months ago
- This repo contains the code of the penetration test benchmark for Generative Agents presented in the paper "AutoPenBench: Benchmarking G… ☆35 · Updated 3 weeks ago
- This is a dataset intended to train an LLM for completely CVE-focused input and output. ☆63 · Updated last month
- ☆47 · Updated 10 months ago
- Official repo for GPTFUZZER: Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts ☆512 · Updated 10 months ago
- ☆127 · Updated last month
- ☆49 · Updated last week
- The repository of VulnBot: Autonomous Penetration Testing for A Multi-Agent Collaborative Framework. ☆86 · Updated 3 months ago
- This repository provides a benchmark for prompt injection attacks and defenses ☆250 · Updated 2 weeks ago
- A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents. ☆226 · Updated last week
- A comprehensive local Linux Privilege-Escalation Benchmark ☆37 · Updated 2 months ago
- SecLLMHolmes is a generalized, fully automated, and scalable framework to systematically evaluate the performance (i.e., accuracy and rea… ☆57 · Updated 3 months ago
- ☆26 · Updated 9 months ago
- The repository of the paper "HackMentor: Fine-Tuning Large Language Models for Cybersecurity". ☆126 · Updated last year
- ☆70 · Updated last year
- Benchmark data from the article "AutoPT: How Far Are We from End2End Automated Web Penetration Testing?" ☆17 · Updated 8 months ago
- Agent Security Bench (ASB) ☆100 · Updated last month
- TensorFlow API analysis tool and malicious model detection tool ☆33 · Updated 2 months ago
- LLM security and privacy ☆49 · Updated 9 months ago
- MCPSafetyScanner - Automated MCP safety auditing and remediation using Agents. More info: https://www.arxiv.org/abs/2504.03767 ☆101 · Updated 3 months ago
- Code snippets to reproduce MCP tool poisoning attacks. ☆164 · Updated 3 months ago
- ☆25 · Updated last year