pasquini-dario / project_mantis
Project Mantis: Hacking Back the AI-Hacker; Prompt Injection as a Defense Against LLM-driven Cyberattacks
☆76 · Updated 2 months ago
Alternatives and similar repositories for project_mantis
Users interested in project_mantis are comparing it to the repositories listed below.
- A productionized greedy coordinate gradient (GCG) attack tool for large language models (LLMs) ☆123 · Updated 7 months ago
- ☆53 · Updated 3 months ago
- Dropbox LLM Security research code and results ☆232 · Updated last year
- NOVA: The Prompt Pattern Matching ☆144 · Updated last week
- Code Repository for: AIRTBench: Measuring Autonomous AI Red Teaming Capabilities in Language Models ☆68 · Updated this week
- Delving into the Realm of LLM Security: An Exploration of Offensive and Defensive Tools, Unveiling Their Present Capabilities. ☆164 · Updated last year
- A very simple open-source implementation of Google's Project Naptime ☆161 · Updated 4 months ago
- Source code for the offsecml framework ☆41 · Updated last year
- Learn AI security through a series of vulnerable LLM CTF challenges. No sign-ups, no cloud fees; run everything locally on your system. ☆293 · Updated 11 months ago
- ☆288 · Updated last week
- ☆15 · Updated 7 months ago
- ☆256 · Updated 6 months ago
- A knowledge source about TTPs used to target GenAI-based systems, copilots, and agents ☆43 · Updated 2 weeks ago
- AIGoat: a deliberately vulnerable AI infrastructure. Learn AI security by solving our challenges. ☆244 · Updated 3 months ago
- A Completely Modular LLM Reverse Engineering, Red Teaming, and Vulnerability Research Framework. ☆48 · Updated 9 months ago
- Payloads for Attacking Large Language Models ☆92 · Updated 2 months ago
- ☆61 · Updated 2 weeks ago
- Codebase of https://arxiv.org/abs/2410.14923 ☆49 · Updated 9 months ago
- Cybersecurity Intelligent Pentesting Helper for Ethical Researcher (CIPHER). Fine-tuned LLM for penetration testing guidance based on wri… ☆27 · Updated 7 months ago
- Awesome products for securing AI systems; includes open-source and commercial options and an infographic licensed CC-BY-SA-4.0. ☆65 · Updated last year
- ☆45 · Updated this week
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs ☆402 · Updated last year
- Experimental tools to backdoor large language models by re-writing their system prompts at a raw parameter level. This allows you to pote… ☆179 · Updated 4 months ago
- A guide to LLM hacking: fundamentals, prompt injection, offense, and defense ☆164 · Updated 2 years ago
- A sandbox environment designed for loading, running, and profiling a wide range of files, including machine learning models, ELFs, Pickle,… ☆325 · Updated this week
- An LLM explicitly designed for getting hacked ☆155 · Updated 2 years ago
- A collection of agents that use Large Language Models (LLMs) to perform tasks common in day-to-day cyber security work. ☆149 · Updated last year
- ComPromptMized: Unleashing Zero-click Worms that Target GenAI-Powered Applications ☆203 · Updated last year
- A Caldera plugin for the emulation of complete, realistic cyberattack chains. ☆56 · Updated 4 months ago
- A YAML-based format for describing tools to LLMs, like man pages but for robots! ☆75 · Updated 3 months ago