leondz / lm_risk_cards
Risks and targets for assessing LLMs & LLM vulnerabilities
☆30 · Updated last year
Alternatives and similar repositories for lm_risk_cards
Users interested in lm_risk_cards are comparing it to the repositories listed below
- A benchmark for prompt injection detection systems. ☆115 · Updated 3 weeks ago
- A repository of Language Model Vulnerabilities and Exposures (LVEs). ☆110 · Updated last year
- ☆109 · Updated 2 weeks ago
- Implementation of BEAST adversarial attack for language models (ICML 2024) ☆87 · Updated last year
- LLM security and privacy ☆49 · Updated 7 months ago
- Explore AI Supply Chain Risk with the AI Risk Database ☆58 · Updated last year
- A benchmark for evaluating the robustness of LLMs and defenses to indirect prompt injection attacks. ☆69 · Updated last year
- A collection of prompt injection mitigation techniques. ☆23 · Updated last year
- Secure Jupyter Notebooks and Experimentation Environment ☆75 · Updated 4 months ago
- Project LLM Verification Standard ☆44 · Updated 3 weeks ago
- Papers about red teaming LLMs and multimodal models. ☆121 · Updated last week
- Whispers in the Machine: Confidentiality in Agentic Systems ☆37 · Updated 2 weeks ago
- A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents. ☆175 · Updated this week
- ☆44 · Updated last month
- ATLAS tactics, techniques, and case studies data ☆73 · Updated last month
- PAL: Proxy-Guided Black-Box Attack on Large Language Models ☆51 · Updated 9 months ago
- Dropbox LLM Security research code and results ☆228 · Updated last year
- ☆34 · Updated 6 months ago
- The Privacy Adversarial Framework (PAF) is a knowledge base of privacy-focused adversarial tactics and techniques. PAF is heavily inspire… ☆56 · Updated last year
- ☆63 · Updated 11 months ago
- This repository provides a benchmark for prompt injection attacks and defenses ☆216 · Updated this week
- Top 10 for Agentic AI (AI Agent Security) ☆110 · Updated last week
- Universal Robustness Evaluation Toolkit (for Evasion) ☆31 · Updated last month
- The official repository of the paper "On the Exploitability of Instruction Tuning". ☆63 · Updated last year
- The official implementation of our pre-print paper "Automatic and Universal Prompt Injection Attacks against Large Language Models". ☆49 · Updated 7 months ago
- Codebase of https://arxiv.org/abs/2410.14923 ☆47 · Updated 7 months ago
- Tree of Attacks (TAP) Jailbreaking Implementation ☆109 · Updated last year
- This project investigates the security of large language models by performing binary classification of a set of input prompts to discover… ☆39 · Updated last year
- An Execution Isolation Architecture for LLM-Based Agentic Systems ☆80 · Updated 4 months ago
- ⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs ☆389 · Updated last year