A Comprehensive Assessment of Trustworthiness in GPT Models
☆313 · Updated Sep 16, 2024
Alternatives and similar repositories for DecodingTrust
Users interested in DecodingTrust are comparing it to the repositories listed below.
- Comprehensive Assessment of Trustworthiness in Multimodal Foundation Models ☆27 · Updated Mar 15, 2025
- Code for the paper "Rethinking Stealthiness of Backdoor Attack against NLP Models" (ACL-IJCNLP 2021) ☆24 · Updated Dec 9, 2021
- [ICML 2024] COLD-Attack: Jailbreaking LLMs with Stealthiness and Controllability ☆176 · Updated Dec 18, 2024
- JailbreakBench: An Open Robustness Benchmark for Jailbreaking Language Models [NeurIPS 2024 Datasets and Benchmarks Track] ☆531 · Updated Apr 4, 2025
- Codebase for decoding compressed trust. ☆25 · Updated May 7, 2024
- We jailbreak GPT-3.5 Turbo’s safety guardrails by fine-tuning it on only 10 adversarially designed examples, at a cost of less than $0.20… ☆339 · Updated Feb 23, 2024
- A lightweight library for large language model (LLM) jailbreaking defense. ☆61 · Updated Sep 11, 2025
- HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal ☆858 · Updated Aug 16, 2024
- [NeurIPS'24] RedCode: Risky Code Execution and Generation Benchmark for Code Agents ☆66 · Updated Nov 14, 2025
- Starter kit for the Trojan Detection Challenge 2023 (LLM Edition), a NeurIPS 2023 competition. ☆90 · Updated May 19, 2024
- [ICML 2024] TrustLLM: Trustworthiness in Large Language Models ☆618 · Updated Jun 24, 2025
- ☆195 · Updated Nov 26, 2023
- ☆17 · Updated Jun 18, 2025
- TAP: An automated jailbreaking method for black-box LLMs ☆221 · Updated Dec 10, 2024
- Repository for the paper (AAAI 2024, Oral) "Visual Adversarial Examples Jailbreak Large Language Models" ☆266 · Updated May 13, 2024
- ☆698 · Updated Jul 2, 2025
- A reading list for large model safety, security, and privacy (including Awesome LLM Security, Safety, etc.). ☆1,865 · Updated Jan 24, 2026
- Papers and resources related to the security and privacy of LLMs 🤖 ☆566 · Updated Jun 8, 2025
- Official repository for "Robust Prompt Optimization for Defending Language Models Against Jailbreaking Attacks" ☆61 · Updated Aug 8, 2024
- This repository provides an original implementation of Detecting Pretraining Data from Large Language Models by *Weijia Shi, *Anirudh Aji… ☆242 · Updated Nov 3, 2023
- ☆20 · Updated Oct 28, 2025
- Universal and Transferable Attacks on Aligned Language Models ☆4,521 · Updated Aug 2, 2024
- Official implementation of the ICLR'24 paper "Curiosity-driven Red Teaming for Large Language Models" (https://openreview.net/pdf?id=4KqkizX…)