TrustAI-laboratory / LMAP
LMAP (Large Language Model Mapper) is like NMAP for LLMs: an LLM vulnerability scanner and zero-day vulnerability fuzzer.
☆28 · Updated last year
Alternatives and similar repositories for LMAP
Users interested in LMAP are comparing it to the repositories listed below.
- Payloads for Attacking Large Language Models ☆116 · Updated last week
- Delving into the Realm of LLM Security: An Exploration of Offensive and Defensive Tools, Unveiling Their Present Capabilities. ☆166 · Updated 2 years ago
- An LLM explicitly designed to be hacked ☆166 · Updated 2 years ago
- A curated list of awesome LLM Red Teaming training, resources, and tools. ☆71 · Updated 4 months ago
- We present MAPTA, a multi-agent system for autonomous web application security assessment that combines large language model orchestratio… ☆87 · Updated 4 months ago
- ☆101 · Updated last month
- LLM | Security | Operations in one GitHub repo with good links and pictures. ☆86 · Updated this week
- Prompt Injections Everywhere ☆176 · Updated last year
- A list of curated resources for people interested in AI Red Teaming, Jailbreaking, and Prompt Injection ☆430 · Updated 8 months ago
- A guide to LLM hacking: fundamentals, prompt injection, offense, and defense ☆180 · Updated 2 years ago
- Prompt Injection Primer for Engineers ☆542 · Updated 2 years ago
- Securing LLMs Against the Top 10 OWASP Large Language Model Vulnerabilities 2024 ☆20 · Updated last year
- LLM security and privacy ☆53 · Updated last year
- A collection of awesome resources related to AI security ☆397 · Updated last week
- ☆236 · Updated 3 weeks ago
- ☆129 · Updated this week
- Payloads for AI Red Teaming and beyond ☆314 · Updated 4 months ago
- Penetration Testing AI Assistant based on open-source LLMs. ☆115 · Updated 9 months ago
- Prototype of Full Agentic Application Security Testing, FAAST = SAST + DAST + LLM agents ☆67 · Updated 8 months ago
- ☆351 · Updated 6 months ago
- The Arcanum Prompt Injection Taxonomy ☆427 · Updated last month
- A productionized greedy coordinate gradient (GCG) attack tool for large language models (LLMs) ☆152 · Updated last year
- Cybersecurity Intelligent Pentesting Helper for Ethical Researcher (CIPHER). Fine-tuned LLM for penetration testing guidance based on wri… ☆35 · Updated last year
- This project investigates the security of large language models by performing binary classification of a set of input prompts to discover… ☆55 · Updated 2 years ago
- Learn AI security through a series of vulnerable LLM CTF challenges. No sign-ups, no cloud fees; run everything locally on your system. ☆314 · Updated last year
- A security system to protect your vibe-coded apps ☆234 · Updated this week
- Curated resources, research, and tools for securing AI systems ☆369 · Updated 2 weeks ago
- ☆120 · Updated 5 months ago
- ☆75 · Updated 11 months ago
- Dropbox LLM Security research code and results ☆252 · Updated last year