NetSPI / Open-LLM-Security-Benchmark
☆15 · Updated 7 months ago
Alternatives and similar repositories for Open-LLM-Security-Benchmark
Users interested in Open-LLM-Security-Benchmark are comparing it to the repositories listed below.
- Code Repository for: AIRTBench: Measuring Autonomous AI Red Teaming Capabilities in Language Models ☆68 · Updated last week
- A productionized greedy coordinate gradient (GCG) attack tool for large language models (LLMs) ☆123 · Updated 7 months ago
- ☆61 · Updated 2 weeks ago
- ☆45 · Updated this week
- Tree of Attacks (TAP) Jailbreaking Implementation ☆114 · Updated last year
- A YAML based format for describing tools to LLMs, like man pages but for robots! ☆75 · Updated 3 months ago
- source code for the offsecml framework ☆41 · Updated last year
- Payloads for Attacking Large Language Models ☆92 · Updated 2 months ago
- A knowledge source about TTPs used to target GenAI-based systems, copilots and agents ☆43 · Updated 2 weeks ago
- All things specific to LLM Red Teaming Generative AI ☆28 · Updated 9 months ago
- An interactive CLI application for interacting with authenticated Jupyter instances. ☆53 · Updated 3 months ago
- Cybersecurity Intelligent Pentesting Helper for Ethical Researcher (CIPHER). Fine tuned LLM for penetration testing guidance based on wri… ☆27 · Updated 7 months ago
- Integrate PyRIT in existing tools ☆29 · Updated 5 months ago
- An LLM explicitly designed for getting hacked ☆157 · Updated 2 years ago
- AI-Powered, Local Pythonic Coding Agent 🐞💻 ☆24 · Updated 5 months ago
- Verizon Burp Extensions: AI Suite ☆132 · Updated 3 months ago
- ☆16 · Updated last year
- Delving into the Realm of LLM Security: An Exploration of Offensive and Defensive Tools, Unveiling Their Present Capabilities. ☆164 · Updated last year
- Reference notes for Attacking and Defending Generative AI presentation ☆64 · Updated last year
- ☆17 · Updated 3 months ago
- ☆91 · Updated 2 months ago
- Payloads for AI Red Teaming and beyond ☆239 · Updated 2 weeks ago
- Data Scientists Go To Jupyter ☆65 · Updated 5 months ago
- A utility to inspect, validate, sign and verify machine learning model files. ☆57 · Updated 6 months ago
- [IJCAI 2024] Imperio is an LLM-powered backdoor attack. It allows the adversary to issue language-guided instructions to control the vict… ☆42 · Updated 5 months ago
- A list of curated resources for people interested in AI Red Teaming, Jailbreaking, and Prompt Injection ☆261 · Updated 3 months ago
- General research for Dreadnode ☆23 · Updated last year
- Multi-Lingual GenAI Red Teaming Tool ☆27 · Updated last year
- A research project to add some brrrrrr to Burp ☆183 · Updated 5 months ago
- Project Mantis: Hacking Back the AI-Hacker; Prompt Injection as a Defense Against LLM-driven Cyberattacks ☆76 · Updated 2 months ago