Code and data of the EMNLP 2022 paper "Why Should Adversarial Perturbations be Imperceptible? Rethink the Research Paradigm in Adversarial NLP".
☆72 · Updated Feb 19, 2023
Alternatives and similar repositories for Advbench
Users that are interested in Advbench are comparing it to the libraries listed below.
- ☆45 · Updated Mar 3, 2023
- [COLM 2024] JailBreakV-28K: A comprehensive benchmark designed to evaluate the transferability of LLM jailbreak attacks to MLLMs, and fur… ☆88 · Updated May 9, 2025
- Code for the ICCV 2025 paper "IDEATOR: Jailbreaking and Benchmarking Large Vision-Language Models Using Themselves" ☆17 · Updated Jul 11, 2025
- ☆10 · Updated Oct 28, 2020
- Code repository for "A Causal Framework to Quantify the Robustness of Mathematical Reasoning with Language Models" ☆15 · Updated Oct 14, 2022
- [NDSS 2026] Official repo for "Odysseus: Jailbreaking Commercial Multimodal LLM-integrated Systems via Dual Steganography" ☆30 · Updated Mar 14, 2026
- A collection of papers on models' trustworthy applications, intended to cover topics like model evaluation & analysis, security, c… ☆21 · Updated May 30, 2023
- ☆60 · Updated Aug 11, 2024
- ☆32 · Updated Aug 9, 2024
- Sensitive-rs is a Rust library for finding, validating, filtering, and replacing sensitive words. It provides efficient algorithms to han… ☆22 · Updated Mar 11, 2026
- ☆14 · Updated May 7, 2024
- ☆13 · Updated Nov 7, 2023
- Official repo for the EMNLP 2024 paper "SOUL: Unlocking the Power of Second-Order Optimization for LLM Unlearning" ☆29 · Updated Oct 1, 2024
- Implementation of "RigorLLM: Resilient Guardrails for Large Language Models against Undesired Content" ☆23 · Updated Jul 28, 2024
- Butler is a tool project for automating service management and task scheduling. ☆16 · Updated this week
- ☆27 · Updated Oct 6, 2024
- Repo for the paper "Examining LLMs' Uncertainty Expression Towards Questions Outside Parametric Knowledge" ☆14 · Updated Feb 20, 2024
- ☆27 · Updated Jun 5, 2024
- ☆19 · Updated Jun 21, 2025
- ☆165 · Updated Sep 2, 2024
- [NeurIPS 2024 D&B] Evaluating Copyright Takedown Methods for Language Models ☆17 · Updated Jul 17, 2024
- Official repo for the NeurIPS 2024 paper "WAGLE: Strategic Weight Attribution for Effective and Modular Unlearning in Large Language Models" ☆19 · Updated Dec 16, 2024
- [NLPCC 2024] Shared Task 10: Regulating Large Language Models ☆14 · Updated Jun 12, 2024
- [AAAI 2025 (Oral)] Jailbreaking Large Vision-language Models via Typographic Visual Prompts ☆196 · Updated Jun 26, 2025
- The dataset and code for the ICLR 2024 paper "Can LLM-Generated Misinformation Be Detected?" ☆81 · Updated Nov 9, 2024
- Repository for the Bias Benchmark for QA dataset ☆139 · Updated Jan 8, 2024
- 🌏 UI component library for the future, based on WebComponent ☆23 · Updated Nov 12, 2024
- Implementation of the paper "Exploring the Universal Vulnerability of Prompt-based Learning Paradigm" (Findings of NAACL 2022) ☆32 · Updated Jul 11, 2022
- Code for the ICLR 2022 paper "On Robust Prefix-Tuning for Text Classification" ☆27 · Updated Mar 21, 2022
- The official implementation of the paper "Large Scale Knowledge Washing" ☆10 · Updated Jun 12, 2024
- [ICML 2025] X-Transfer Attacks: Towards Super Transferable Adversarial Attacks on CLIP ☆42 · Updated Feb 3, 2026
- Universal and Transferable Attacks on Aligned Language Models ☆4,568 · Updated Aug 2, 2024
- Röttger et al. (NAACL 2024): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" ☆130 · Updated Feb 24, 2025
- [NAACL 2022] "SemAttack: Natural Textual Attacks via Different Semantic Spaces" by Boxin Wang, Chejian Xu, Xiangyu Liu, Yu Cheng, Bo Li ☆21 · Updated Jun 11, 2022
- Accepted by ECCV 2024 ☆193 · Updated Oct 15, 2024
- ☆22 · Updated Oct 25, 2024
- Official repo for the paper "Make Some Noise: Reliable and Efficient Single-Step Adversarial Training" (https://arxiv.org/abs/2202.01181) ☆25 · Updated Oct 17, 2022
- A collection of research papers related to Natural Language Reasoning ☆11 · Updated May 27, 2022
- [COLING 2025] Official repo of the paper "Not Aligned" is Not "Malicious": Being Careful about Hallucinations of Large Language Models' Jail… ☆12 · Updated Jul 26, 2024