leondz / lm_risk_cards
Risks and targets for assessing LLMs & LLM vulnerabilities
☆30 · Updated 9 months ago
Alternatives and similar repositories for lm_risk_cards:
Users interested in lm_risk_cards are comparing it to the libraries listed below.
- A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents. ☆100 · Updated this week
- A repository of Language Model Vulnerabilities and Exposures (LVEs). ☆108 · Updated 11 months ago
- LLM security and privacy ☆47 · Updated 4 months ago
- A benchmark for prompt injection detection systems. ☆96 · Updated 3 weeks ago
- Implementation of BEAST adversarial attack for language models (ICML 2024) ☆79 · Updated 9 months ago
- A collection of prompt injection mitigation techniques. ☆20 · Updated last year
- Explore AI Supply Chain Risk with the AI Risk Database ☆52 · Updated 9 months ago
- A benchmark for evaluating the robustness of LLMs and defenses to indirect prompt injection attacks. ☆60 · Updated 10 months ago
- This project investigates the security of large language models by performing binary classification of a set of input prompts to discover… ☆37 · Updated last year
- Payloads for Attacking Large Language Models ☆75 · Updated 7 months ago
- ☆84 · Updated this week
- ☆119 · Updated 3 months ago
- This repository provides an implementation to formalize and benchmark prompt injection attacks and defenses. ☆174 · Updated last month
- ATLAS tactics, techniques, and case studies data ☆57 · Updated 5 months ago
- LLM | Security | Operations in one GitHub repo with good links and pictures. ☆24 · Updated 2 months ago
- Dropbox LLM Security research code and results ☆221 · Updated 9 months ago
- Whispers in the Machine: Confidentiality in LLM-integrated Systems ☆34 · Updated this week
- Universal Robustness Evaluation Toolkit (for Evasion) ☆31 · Updated 11 months ago
- The Privacy Adversarial Framework (PAF) is a knowledge base of privacy-focused adversarial tactics and techniques. PAF is heavily inspire… ☆56 · Updated last year
- ☆54 · Updated 8 months ago
- Papers about red teaming LLMs and multimodal models. ☆99 · Updated 3 months ago
- Codebase of https://arxiv.org/abs/2410.14923 ☆44 · Updated 4 months ago
- ☆29 · Updated 3 months ago
- 🧠 LLMFuzzer - Fuzzing Framework for Large Language Models 🧠 LLMFuzzer is the first open-source fuzzing framework specifically designed… ☆259 · Updated last year
- The official implementation of our pre-print paper "Automatic and Universal Prompt Injection Attacks against Large Language Models". ☆41 · Updated 4 months ago
- PAL: Proxy-Guided Black-Box Attack on Large Language Models ☆49 · Updated 6 months ago
- Secure Jupyter Notebooks and Experimentation Environment ☆67 · Updated 3 weeks ago
- Tree of Attacks (TAP) Jailbreaking Implementation ☆100 · Updated last year
- OWASP Machine Learning Security Top 10 Project ☆81 · Updated last month
- Repo for the research paper "SecAlign: Defending Against Prompt Injection with Preference Optimization" ☆37 · Updated last month