pasquini-dario / project_mantis
Project Mantis: Hacking Back the AI-Hacker; Prompt Injection as a Defense Against LLM-driven Cyberattacks
☆67 · Updated 5 months ago
Alternatives and similar repositories for project_mantis
Users interested in project_mantis are comparing it to the repositories listed below.
- A productionized greedy coordinate gradient (GCG) attack tool for large language models (LLMs) ☆109 · Updated 4 months ago
- Codebase of https://arxiv.org/abs/2410.14923 ☆47 · Updated 6 months ago
- Experimental tools to backdoor large language models by re-writing their system prompts at a raw parameter level. This allows you to pote… ☆165 · Updated last month
- ChainReactor is a research project that leverages AI planning to discover exploitation chains for privilege escalation on Unix systems. T… ☆44 · Updated 6 months ago
- A Caldera plugin for the emulation of complete, realistic cyberattack chains. ☆53 · Updated 2 months ago
- Cybersecurity Intelligent Pentesting Helper for Ethical Researcher (CIPHER). Fine tuned LLM for penetration testing guidance based on wri… ☆22 · Updated 4 months ago
- ☆39 · Updated last week
- An MCP server for using Semgrep to scan code for security vulnerabilities. ☆148 · Updated 2 weeks ago
- A very simple open source implementation of Google's Project Naptime ☆142 · Updated last month
- A Model Context Protocol (MCP) server for querying the VirusTotal API. ☆51 · Updated 2 months ago
- An interactive CLI application for interacting with authenticated Jupyter instances. ☆53 · Updated last week
- A YAML-based format for describing tools to LLMs, like man pages but for robots! ☆71 · Updated 2 weeks ago
- ☆65 · Updated 5 months ago
- Secure Code Review AI Agent (SeCoRA) - AI SAST ☆48 · Updated 3 months ago
- ATLAS tactics, techniques, and case studies data ☆71 · Updated 3 weeks ago
- 🤖 A GitHub action that leverages fabric patterns through an agent-based approach ☆26 · Updated 4 months ago
- ☆73 · Updated 2 weeks ago
- Automated vulnerability discovery and annotation ☆67 · Updated 9 months ago
- Dropbox LLM Security research code and results ☆225 · Updated 11 months ago
- NOVA: The Prompt Pattern Matching ☆80 · Updated 2 weeks ago
- An LLM explicitly designed for getting hacked ☆149 · Updated last year
- https://arxiv.org/abs/2412.02776 ☆54 · Updated 5 months ago
- A Completely Modular LLM Reverse Engineering, Red Teaming, and Vulnerability Research Framework. ☆46 · Updated 6 months ago
- Vulnerability-Lookup facilitates quick correlation of vulnerabilities from various sources, independent of vulnerability IDs, and streaml… ☆272 · Updated this week
- A sandbox environment designed for loading, running and profiling a wide range of files, including machine learning models, ELFs, Pickle,… ☆318 · Updated this week
- Source code for the offsecml framework ☆40 · Updated 11 months ago
- Framework for Monitoring File Ingestion Source for Yara Matches ☆46 · Updated 2 months ago
- Payloads for Attacking Large Language Models ☆83 · Updated 10 months ago
- ☆40 · Updated last week
- A list of curated resources for people interested in AI Red Teaming, Jailbreaking, and Prompt Injection ☆134 · Updated 2 weeks ago