jsotiro / ThreatModelsLinks
β11Updated last year
Alternatives and similar repositories for ThreatModels
Users that are interested in ThreatModels are comparing it to the libraries listed below
Sorting:
- π§ LLMFuzzer - Fuzzing Framework for Large Language Models π§ LLMFuzzer is the first open-source fuzzing framework specifically designed β¦β332Updated last year
- This repository provides a benchmark for prompt injection attacks and defenses in LLMsβ361Updated last month
- A collection of awesome resources related AI securityβ374Updated last week
- A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents.β384Updated 3 weeks ago
- The automated prompt injection framework for LLM-integrated applications.β243Updated last year
- CTF challenges designed and implemented in machine learning applicationsβ191Updated 2 months ago
- Every practical and proposed defense against prompt injection.β597Updated 10 months ago
- PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to aβ¦β443Updated last year
- β73Updated last year
- TAP: An automated jailbreaking method for black-box LLMsβ202Updated last year
- β‘ Vigil β‘ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputsβ432Updated last year
- A benchmark for prompt injection detection systems.β153Updated last week
- LLM security and privacyβ52Updated last year
- β98Updated 4 months ago
- A repository of Language Model Vulnerabilities and Exposures (LVEs).β112Updated last year
- β54Updated last year
- Papers about red teaming LLMs and Multimodal models.β157Updated 6 months ago
- A curated list of academic events on AI Security & Privacyβ167Updated last year
- A curated list of awesome resources about LLM supply chain security (including papers, security reports and CVEs)β90Updated 11 months ago
- HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusalβ812Updated last year
- β99Updated last year
- Official repo for GPTFUZZER : Red Teaming Large Language Models with Auto-Generated Jailbreak Promptsβ555Updated last year
- Dropbox LLM Security research code and resultsβ251Updated last year
- [USENIX Security '24] An LLM-Assisted Easy-to-Trigger Backdoor Attack on Code Completion Models: Injecting Disguised Vulnerabilities agaiβ¦β54Updated 9 months ago
- β667Updated 5 months ago
- Learn AI security through a series of vulnerable LLM CTF challenges. No sign ups, no cloud fees, run everything locally on your system.β310Updated last year
- Automated web vulnerability scanning with LLM agentsβ439Updated 6 months ago
- CVE-Bench: A Benchmark for AI Agentsβ Ability to Exploit Real-World Web Application Vulnerabilitiesβ129Updated last month
- A curated list of MLSecOps tools, articles and other resources on security applied to Machine Learning and MLOps systems.β412Updated 4 months ago
- Agent Security Bench (ASB)β155Updated last month