leondz / lm_risk_cards
Risks and targets for assessing LLMs & LLM vulnerabilities
☆30 · Updated 11 months ago
Alternatives and similar repositories for lm_risk_cards
Users interested in lm_risk_cards are comparing it to the libraries listed below.
- A collection of prompt injection mitigation techniques. ☆22 · Updated last year
- LLM security and privacy ☆49 · Updated 7 months ago
- Explore AI Supply Chain Risk with the AI Risk Database ☆56 · Updated last year
- A benchmark for prompt injection detection systems. ☆110 · Updated this week
- A benchmark for evaluating the robustness of LLMs and defenses to indirect prompt injection attacks. ☆66 · Updated last year
- A repository of Language Model Vulnerabilities and Exposures (LVEs). ☆109 · Updated last year
- Implementation of BEAST adversarial attack for language models (ICML 2024) ☆86 · Updated last year
- A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents. ☆154 · Updated last week
- Whispers in the Machine: Confidentiality in Agentic Systems ☆37 · Updated last week
- Package to optimize Adversarial Attacks against (Large) Language Models with Varied Objectives ☆68 · Updated last year
- The official implementation of our pre-print paper "Automatic and Universal Prompt Injection Attacks against Large Language Models". ☆46 · Updated 6 months ago
- Universal Robustness Evaluation Toolkit (for Evasion) ☆31 · Updated last week
- ATLAS tactics, techniques, and case studies data ☆71 · Updated 3 weeks ago
- Secure Jupyter Notebooks and Experimentation Environment ☆74 · Updated 3 months ago
- PAL: Proxy-Guided Black-Box Attack on Large Language Models ☆50 · Updated 9 months ago
- This repository provides a benchmark for prompt injection attacks and defenses ☆196 · Updated 2 weeks ago
- Repo for the research paper "SecAlign: Defending Against Prompt Injection with Preference Optimization" ☆45 · Updated last month
- Papers about red teaming LLMs and multimodal models. ☆115 · Updated 5 months ago
- Tree of Attacks (TAP) Jailbreaking Implementation ☆108 · Updated last year
- A prompt injection game to collect data for robust ML research ☆56 · Updated 3 months ago
- Code to break Llama Guard ☆31 · Updated last year
- Dropbox LLM Security research code and results ☆225 · Updated 11 months ago
- TaskTracker is an approach to detecting task drift in Large Language Models (LLMs) by analysing their internal activations. It provides a… ☆55 · Updated 2 months ago
- The official repository of the paper "On the Exploitability of Instruction Tuning". ☆62 · Updated last year
- The Privacy Adversarial Framework (PAF) is a knowledge base of privacy-focused adversarial tactics and techniques. PAF is heavily inspire… ☆55 · Updated last year
- LLM Self Defense: By Self Examination, LLMs know they are being tricked ☆32 · Updated 11 months ago
- Project LLM Verification Standard ☆43 · Updated last year