jsotiro / ThreatModels
☆12 · Updated last year
Alternatives and similar repositories for ThreatModels
Users interested in ThreatModels are comparing it to the repositories listed below.
- ☆133 · Updated 6 months ago
- Code snippets to reproduce MCP tool poisoning attacks. ☆192 · Updated 9 months ago
- 🧠 LLMFuzzer - Fuzzing Framework for Large Language Models 🧠 LLMFuzzer is the first open-source fuzzing framework specifically designed … ☆339 · Updated last year
- CyberGym is a large-scale, high-quality cybersecurity evaluation framework designed to rigorously assess the capabilities of AI agents on… ☆108 · Updated 3 weeks ago
- CTF challenges designed and implemented in machine learning applications ☆201 · Updated 4 months ago
- This repository provides a benchmark for prompt injection attacks and defenses in LLMs ☆384 · Updated 3 months ago
- A collection of awesome resources related to AI security ☆533 · Updated this week
- A repository of Language Model Vulnerabilities and Exposures (LVEs). ☆112 · Updated last year
- The automated prompt injection framework for LLM-integrated applications. ☆253 · Updated last year
- A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents. ☆425 · Updated this week
- Every practical and proposed defense against prompt injection. ☆630 · Updated 11 months ago
- Learn AI security through a series of vulnerable LLM CTF challenges. No sign-ups, no cloud fees; run everything locally on your system. ☆314 · Updated last year
- All things specific to LLM red teaming of generative AI ☆29 · Updated last year
- An LLM explicitly designed for getting hacked ☆166 · Updated 2 years ago
- XBOW Validation Benchmarks ☆467 · Updated 7 months ago
- Payloads for Attacking Large Language Models ☆119 · Updated 3 weeks ago
- TaskTracker is an approach to detecting task drift in Large Language Models (LLMs) by analysing their internal activations. It provides a… ☆79 · Updated 5 months ago
- A benchmark for prompt injection detection systems. ☆158 · Updated last month
- Dropbox LLM security research code and results ☆254 · Updated last year
- Adversarial Machine Learning (AML) Capture the Flag (CTF) ☆113 · Updated last year
- A productionized greedy coordinate gradient (GCG) attack tool for large language models (LLMs) ☆157 · Updated last year
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to a… ☆452 · Updated last year
- ☆117 · Updated 4 months ago
- A curated list of MLSecOps tools, articles, and other resources on security applied to machine learning and MLOps systems. ☆421 · Updated 6 months ago
- Automated web vulnerability scanning with LLM agents ☆446 · Updated 7 months ago
- CVE-Bench: A Benchmark for AI Agents' Ability to Exploit Real-World Web Application Vulnerabilities ☆146 · Updated 3 weeks ago
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs ☆452 · Updated 2 years ago
- ☆139 · Updated last week
- MCPSafetyScanner - Automated MCP safety auditing and remediation using agents. More info: https://www.arxiv.org/abs/2504.03767 ☆163 · Updated 9 months ago
- ☆190 · Updated last month