ReversecLabs / spikee
☆54 · Updated last week
Alternatives and similar repositories for spikee
Users interested in spikee are comparing it to the libraries listed below.
- A productionized greedy coordinate gradient (GCG) attack tool for large language models (LLMs) ☆122 · Updated 6 months ago
- Code repository for AIRTBench: Measuring Autonomous AI Red Teaming Capabilities in Language Models ☆61 · Updated this week
- Reference notes for the Attacking and Defending Generative AI presentation ☆64 · Updated 11 months ago
- ☆40 · Updated this week
- Tree of Attacks (TAP) jailbreaking implementation ☆111 · Updated last year
- Source code for the offsecml framework ☆41 · Updated last year
- Integrate PyRIT in existing tools ☆28 · Updated 4 months ago
- A research project to add some brrrrrr to Burp ☆181 · Updated 5 months ago
- LLM Testing Findings Templates ☆72 · Updated last year
- ☆41 · Updated 7 months ago
- A YAML-based format for describing tools to LLMs, like man pages but for robots! ☆75 · Updated 2 months ago
- Verizon Burp Extensions: AI Suite ☆131 · Updated 2 months ago
- Payloads for Attacking Large Language Models ☆91 · Updated last month
- NOVA: The Prompt Pattern Matching ☆128 · Updated 2 months ago
- Data Scientists Go To Jupyter ☆63 · Updated 4 months ago
- An interactive CLI application for interacting with authenticated Jupyter instances ☆53 · Updated 2 months ago
- CALDERA plugin for adversary emulation of AI-enabled systems ☆99 · Updated last year
- Top 10 for Agentic AI (AI Agent Security), serving as the core for OWASP and CSA red-teaming work ☆115 · Updated last month
- The Arcanum Prompt Injection Taxonomy ☆126 · Updated 2 months ago
- A very simple open-source implementation of Google's Project Naptime ☆159 · Updated 3 months ago
- A powerful tool that leverages AI to automatically generate comprehensive security documentation for your projects ☆89 · Updated 2 months ago
- AIGoat: a deliberately vulnerable AI infrastructure. Learn AI security by solving our challenges. ☆237 · Updated 2 months ago
- ☆274 · Updated last week
- An experimental project exploring the use of Large Language Models (LLMs) to solve HackTheBox machines autonomously ☆57 · Updated 2 months ago
- An LLM explicitly designed for getting hacked ☆152 · Updated last year
- ☆16 · Updated last year
- AI-powered bug hunter, as a VS Code plugin ☆36 · Updated 10 months ago
- Using ML models for red teaming ☆43 · Updated last year
- An experimental project using LLM technology to generate security documentation for Open Source Software (OSS) projects ☆31 · Updated 4 months ago
- A Caldera plugin for the emulation of complete, realistic cyberattack chains ☆54 · Updated 4 months ago