HumanCompatibleAI / tensor-trust
A prompt injection game to collect data for robust ML research
☆55 · Updated 2 months ago
Alternatives and similar repositories for tensor-trust:
Users interested in tensor-trust are comparing it to the libraries listed below.
- Dataset for the Tensor Trust project ☆39 · Updated last year
- The official implementation of our pre-print paper "Automatic and Universal Prompt Injection Attacks against Large Language Models". ☆45 · Updated 6 months ago
- ☆59 · Updated 5 months ago
- Code to break Llama Guard ☆31 · Updated last year
- ☆52 · Updated 2 months ago
- PAL: Proxy-Guided Black-Box Attack on Large Language Models ☆50 · Updated 8 months ago
- ☆31 · Updated 5 months ago
- The official repository of the paper "On the Exploitability of Instruction Tuning". ☆62 · Updated last year
- Finding trojans in aligned LLMs. Official repository for the competition hosted at SaTML 2024. ☆111 · Updated 10 months ago
- Adversarial Attacks on GPT-4 via Simple Random Search [Dec 2023] ☆43 · Updated 11 months ago
- Fine-tuning base models to build robust task-specific models ☆29 · Updated last year
- Package to optimize Adversarial Attacks against (Large) Language Models with Varied Objectives ☆68 · Updated last year
- ☆86 · Updated last year
- A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents. ☆130 · Updated 3 weeks ago
- Papers about red teaming LLMs and Multimodal models. ☆111 · Updated 5 months ago
- ☆31 · Updated last year
- [ICLR'24 Spotlight] A language model (LM)-based emulation framework for identifying the risks of LM agents with tool use ☆141 · Updated last year
- Repo for the research paper "SecAlign: Defending Against Prompt Injection with Preference Optimization" ☆44 · Updated 2 weeks ago
- Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses (NeurIPS 2024) ☆60 · Updated 3 months ago
- Implementation of the BEAST adversarial attack for language models (ICML 2024) ☆82 · Updated 11 months ago
- ☆59 · Updated 9 months ago
- ☆20 · Updated 3 months ago
- [ICLR 2025] Dissecting Adversarial Robustness of Multimodal LM Agents ☆80 · Updated 2 months ago
- Fluent student-teacher redteaming ☆20 · Updated 9 months ago
- A benchmark for evaluating the robustness of LLMs and defenses to indirect prompt injection attacks. ☆66 · Updated last year
- ☆93 · Updated last month
- ☆35 · Updated 6 months ago
- General research for Dreadnode ☆21 · Updated 10 months ago
- Whispers in the Machine: Confidentiality in LLM-integrated Systems ☆35 · Updated last month
- ☆17 · Updated last year