ReversecLabs / spikee
☆137 · Updated this week
Alternatives and similar repositories for spikee
Users interested in spikee are comparing it to the repositories listed below.
- A productionized greedy coordinate gradient (GCG) attack tool for large language models (LLMs) ☆155 · Updated last year
- Code Repository for: AIRTBench: Measuring Autonomous AI Red Teaming Capabilities in Language Models ☆92 · Updated last week
- A modular framework for benchmarking LLMs and agentic strategies on security challenges across HackTheBox, TryHackMe, PortSwigger Labs, C… ☆193 · Updated this week
- A YAML based format for describing tools to LLMs, like man pages but for robots! ☆83 · Updated 9 months ago
- ☆82 · Updated last month
- Payloads for AI Red Teaming and beyond ☆314 · Updated 5 months ago
- A research project to add some brrrrrr to Burp ☆197 · Updated 11 months ago
- Reference notes for Attacking and Defending Generative AI presentation ☆69 · Updated last year
- AIGoat: A deliberately Vulnerable AI Infrastructure. Learn AI security through solving our challenges. ☆259 · Updated 4 months ago
- ☆241 · Updated last month
- Source code for the offsecml framework ☆44 · Updated last year
- ☆44 · Updated last year
- A growing collection of MCP servers bringing offensive security tools to AI assistants. Nmap, Ghidra, Nuclei, SQLMap, Hashcat and more. ☆205 · Updated this week
- Verizon Burp Extensions: AI Suite ☆142 · Updated 9 months ago
- Repository for CoSAI Workstream 4, Secure Design Patterns for Agentic Systems ☆82 · Updated 2 weeks ago
- NOVA: The Prompt Pattern Matching ☆88 · Updated last week
- A collection of servers which are deliberately vulnerable to learn Pentesting MCP Servers. ☆217 · Updated last month
- ☆363 · Updated 4 months ago
- ☆368 · Updated last month
- Integrate PyRIT in existing tools ☆46 · Updated 11 months ago
- An LLM explicitly designed for getting hacked ☆167 · Updated 2 years ago
- Hands-on MCP security lab: 10 real incidents reproduced with vulnerable/secure MCP servers, pytest regressions, and Claude/Cursor battle-… ☆81 · Updated 2 months ago
- Tree of Attacks (TAP) Jailbreaking Implementation ☆117 · Updated last year
- Payloads for Attacking Large Language Models ☆119 · Updated 3 weeks ago
- LLM Testing Findings Templates ☆75 · Updated last year
- Delving into the Realm of LLM Security: An Exploration of Offensive and Defensive Tools, Unveiling Their Present Capabilities. ☆166 · Updated 2 years ago
- CALDERA plugin for adversary emulation of AI-enabled systems ☆109 · Updated 2 years ago
- AI / LLM Red Team Field Manual & Consultant’s Handbook ☆229 · Updated last week
- AI agent for autonomous cyber operations ☆467 · Updated 2 months ago
- A Caldera plugin for the emulation of complete, realistic cyberattack chains. ☆60 · Updated 2 months ago