leondz / lm_risk_cards
Risks and targets for assessing LLMs & LLM vulnerabilities
☆30 · Updated 10 months ago
Alternatives and similar repositories for lm_risk_cards:
Users who are interested in lm_risk_cards are comparing it to the repositories listed below.
- A repository of Language Model Vulnerabilities and Exposures (LVEs). ☆108 · Updated last year
- A collection of prompt injection mitigation techniques. ☆20 · Updated last year
- Implementation of the BEAST adversarial attack for language models (ICML 2024). ☆81 · Updated 10 months ago
- LLM security and privacy. ☆48 · Updated 5 months ago
- ☆90 · Updated last month
- The official implementation of the pre-print paper "Automatic and Universal Prompt Injection Attacks against Large Language Models". ☆43 · Updated 5 months ago
- A benchmark for prompt injection detection systems. ☆99 · Updated last month
- A dynamic environment to evaluate attacks and defenses for LLM agents. ☆116 · Updated last week
- A benchmark for evaluating the robustness of LLMs and defenses to indirect prompt injection attacks. ☆63 · Updated 11 months ago
- Papers about red teaming LLMs and multimodal models. ☆105 · Updated 4 months ago
- ☆31 · Updated 4 months ago
- OWASP Machine Learning Security Top 10 Project. ☆83 · Updated 2 months ago
- Universal Robustness Evaluation Toolkit (for Evasion). ☆32 · Updated last year
- This repository provides an implementation to formalize and benchmark prompt injection attacks and defenses. ☆182 · Updated 2 months ago
- LLM security and operations in one GitHub repo, with good links and pictures. ☆24 · Updated 3 months ago
- Dropbox LLM Security research code and results. ☆221 · Updated 10 months ago
- ATLAS tactics, techniques, and case studies data. ☆60 · Updated 2 weeks ago
- Code to conduct an embedding attack on LLMs. ☆23 · Updated 2 months ago
- The official repository of the paper "On the Exploitability of Instruction Tuning". ☆61 · Updated last year
- Whispers in the Machine: Confidentiality in LLM-integrated Systems. ☆35 · Updated 3 weeks ago
- Secure Jupyter notebooks and experimentation environment. ☆72 · Updated last month
- Explore AI supply chain risk with the AI Risk Database. ☆53 · Updated 10 months ago
- Project LLM Verification Standard. ☆41 · Updated 11 months ago
- Supply chain security for ML. ☆133 · Updated this week
- A future-proof vulnerability detection benchmark based on CVEs in open-source repos. ☆51 · Updated this week
- PAL: Proxy-Guided Black-Box Attack on Large Language Models. ☆49 · Updated 7 months ago
- Payloads for attacking large language models. ☆77 · Updated 8 months ago
- ☆42 · Updated 8 months ago
- Package to optimize adversarial attacks against (large) language models with varied objectives. ☆67 · Updated last year
- This project investigates the security of large language models by performing binary classification of a set of input prompts to discover… ☆38 · Updated last year