BishopFox / BrokenHill
A productionized greedy coordinate gradient (GCG) attack tool for large language models (LLMs)
☆155 · Updated last year
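For context, the greedy coordinate gradient (GCG) attack that BrokenHill productionizes optimizes an adversarial token sequence by taking the gradient of a loss with respect to the one-hot token indicators, shortlisting the top-k tokens with the most negative gradient, and greedily keeping whichever substitution lowers the loss. The sketch below is a minimal toy illustration of that loop, assuming a simplified differentiable surrogate (a mean-embedding distance loss with an analytic gradient) in place of a real LLM, and exhaustive evaluation of candidates in place of GCG's random batch sampling; it is not BrokenHill's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, DIM, SEQ, TOPK, STEPS = 50, 8, 6, 5, 20

E = rng.normal(size=(VOCAB, DIM))   # toy embedding table (stand-in for the model)
target = rng.normal(size=DIM)       # embedding the adversarial sequence should approach

def loss(tokens):
    # Toy objective: squared distance between the mean token embedding and the target.
    m = E[tokens].mean(axis=0)
    return float(((m - target) ** 2).sum())

tokens = rng.integers(0, VOCAB, size=SEQ)
init_loss = best = loss(tokens)

for _ in range(STEPS):
    # Gradient of the loss w.r.t. the one-hot token indicators
    # (analytic for this toy loss; a real attack backpropagates through the LLM).
    m = E[tokens].mean(axis=0)
    grad = (2.0 / SEQ) * (E @ (m - target))      # shape (VOCAB,)
    # Top-k candidate substitutions: tokens whose gradient entry is most negative.
    cand = np.argsort(grad)[:TOPK]
    improved = False
    # Greedy coordinate step: try each (position, candidate) swap, keep the best.
    for pos in range(SEQ):
        for tok in cand:
            trial = tokens.copy()
            trial[pos] = tok
            l = loss(trial)
            if l < best:
                best, tokens, improved = l, trial, True
    if not improved:
        break

print(f"loss {init_loss:.3f} -> {best:.3f}")
```

The same gradient-shortlist/greedy-evaluate structure scales up to the real attack, where the expensive part is the forward pass used to score each candidate swap.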
Alternatives and similar repositories for BrokenHill
Users interested in BrokenHill are comparing it to the repositories listed below.
- ☆137 · Updated this week
- Code Repository for: AIRTBench: Measuring Autonomous AI Red Teaming Capabilities in Language Models ☆92 · Updated last week
- Integrate PyRIT in existing tools ☆46 · Updated 11 months ago
- Tree of Attacks (TAP) Jailbreaking Implementation ☆117 · Updated last year
- Source code for the offsecml framework ☆44 · Updated last year
- A YAML based format for describing tools to LLMs, like man pages but for robots! ☆83 · Updated 9 months ago
- Verizon Burp Extensions: AI Suite ☆142 · Updated 9 months ago
- ☆82 · Updated last month
- A modular framework for benchmarking LLMs and agentic strategies on security challenges across HackTheBox, TryHackMe, PortSwigger Labs, C… ☆193 · Updated this week
- AI / LLM Red Team Field Manual & Consultant’s Handbook ☆229 · Updated last week
- A research project to add some brrrrrr to Burp ☆197 · Updated 11 months ago
- Payloads for Attacking Large Language Models ☆119 · Updated 3 weeks ago
- Payloads for AI Red Teaming and beyond ☆314 · Updated 5 months ago
- A LLM explicitly designed for getting hacked ☆167 · Updated 2 years ago
- https://arxiv.org/abs/2412.02776 ☆67 · Updated last year
- Reference notes for Attacking and Defending Generative AI presentation ☆69 · Updated last year
- Using ML models for red teaming ☆45 · Updated 2 years ago
- An interactive CLI application for interacting with authenticated Jupyter instances. ☆55 · Updated 8 months ago
- Delving into the Realm of LLM Security: An Exploration of Offensive and Defensive Tools, Unveiling Their Present Capabilities. ☆166 · Updated 2 years ago
- All things specific to LLM Red Teaming Generative AI ☆29 · Updated last year
- Cybersecurity Intelligent Pentesting Helper for Ethical Researcher (CIPHER). Fine tuned LLM for penetration testing guidance based on wri… ☆35 · Updated last year
- AIGoat: A deliberately Vulnerable AI Infrastructure. Learn AI security through solving our challenges. ☆259 · Updated 4 months ago
- Agentic pentest tooling ☆126 · Updated this week
- ☆44 · Updated last year
- A collection of servers which are deliberately vulnerable to learn Pentesting MCP Servers. ☆217 · Updated last month
- Project Mantis: Hacking Back the AI-Hacker; Prompt Injection as a Defense Against LLM-driven Cyberattacks ☆93 · Updated 8 months ago
- An example vulnerable app that integrates an LLM ☆26 · Updated last year
- A knowledge source about TTPs used to target GenAI-based systems, copilots and agents ☆133 · Updated last month
- Code snippets to reproduce MCP tool poisoning attacks. ☆192 · Updated 9 months ago
- ☆22 · Updated last year