HumanCompatibleAI / tensor-trust-data
Dataset for the Tensor Trust project
☆35 · Updated 10 months ago
Alternatives and similar repositories for tensor-trust-data:
Users interested in tensor-trust-data are comparing it to the libraries listed below.
- A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents. ☆74 · Updated this week
- [ICLR'24 Spotlight] A language model (LM)-based emulation framework for identifying the risks of LM agents with tool use ☆125 · Updated 9 months ago
- Improving Alignment and Robustness with Circuit Breakers ☆174 · Updated 3 months ago
- ☆51 · Updated last year
- AmpleGCG: Learning a Universal and Transferable Generator of Adversarial Attacks on Both Open and Closed LLMs ☆51 · Updated 2 months ago
- Package to optimize Adversarial Attacks against (Large) Language Models with Varied Objectives ☆66 · Updated 10 months ago
- A lightweight library for large language model (LLM) jailbreaking defense. ☆45 · Updated 3 months ago
- Weak-to-Strong Jailbreaking on Large Language Models ☆73 · Updated 10 months ago
- Röttger et al. (2023): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" ☆77 · Updated last year
- Code and datasets for the paper Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment ☆88 · Updated 10 months ago
- Repo for the research paper "Aligning LLMs to Be Robust Against Prompt Injection" ☆32 · Updated last month
- A fast + lightweight implementation of the GCG algorithm in PyTorch ☆157 · Updated last week
- ☆31 · Updated last year
- Finding trojans in aligned LLMs. Official repository for the competition hosted at SaTML 2024. ☆110 · Updated 7 months ago
- Code for the ICLR 2024 paper "How to catch an AI liar: Lie detection in black-box LLMs by asking unrelated questions" ☆64 · Updated 7 months ago
- Official Repository for ACL 2024 Paper SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding ☆112 · Updated 6 months ago
- Official implementation of AdvPrompter https://arxiv.org/abs/2404.16873 ☆134 · Updated 8 months ago
- Run safety benchmarks against AI models and view detailed reports showing how well they performed. ☆73 · Updated this week
- ☆89 · Updated last year
- A prompt injection game to collect data for robust ML research ☆49 · Updated 3 weeks ago
- ☆16 · Updated 5 months ago
- Contains random samples referenced in the paper "Sleeper Agents: Training Robustly Deceptive LLMs that Persist Through Safety Training". ☆92 · Updated 10 months ago
- Adversarial Attacks on GPT-4 via Simple Random Search [Dec 2023] ☆43 · Updated 8 months ago
- The official implementation of our pre-print paper "Automatic and Universal Prompt Injection Attacks against Large Language Models". ☆39 · Updated 2 months ago
- Open One-Stop Moderation Tools for Safety Risks, Jailbreaks, and Refusals of LLMs ☆56 · Updated last month
- Code release for "Debating with More Persuasive LLMs Leads to More Truthful Answers" ☆95 · Updated 9 months ago
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆88 · Updated 7 months ago
- ☆158 · Updated last year
- WMDP is an LLM proxy benchmark for hazardous knowledge in bio, cyber, and chemical security. We also release code for RMU, an unlearning method. ☆92 · Updated 8 months ago
- Code to break Llama Guard ☆31 · Updated last year