AI-secure / adversarial-glue
[NeurIPS 2021] "Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models" by Boxin Wang*, Chejian Xu*, Shuohang Wang, Zhe Gan, Yu Cheng, Jianfeng Gao, Ahmed Hassan Awadallah, Bo Li.
☆13 · Updated 2 years ago
Alternatives and similar repositories for adversarial-glue
Users interested in adversarial-glue are comparing it to the repositories listed below.
- ☆43 · Updated 2 years ago
- ☆48 · Updated 10 months ago
- The official repository of the paper "On the Exploitability of Instruction Tuning". ☆66 · Updated last year
- ICLR 2024 paper showing properties of safety tuning and exaggerated safety. ☆90 · Updated last year
- Official repository for Dataset Inference for LLMs. ☆43 · Updated last year
- Code for the paper "Universal Jailbreak Backdoors from Poisoned Human Feedback". ☆66 · Updated last year
- ☆23 · Updated 11 months ago
- TACL 2025: Investigating Adversarial Trigger Transfer in Large Language Models. ☆19 · Updated 4 months ago
- NeurIPS'24 - LLM Safety Landscape. ☆36 · Updated 2 months ago
- Implementation of the paper "Exploring the Universal Vulnerability of Prompt-based Learning Paradigm" (Findings of NAACL 2022). ☆31 · Updated 3 years ago
- ☆23 · Updated last year
- ☆57 · Updated last year
- Starter kit for the Trojan Detection Challenge 2023 (LLM Edition), a NeurIPS 2023 competition. ☆90 · Updated last year
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning. ☆98 · Updated last year
- Code and datasets for the paper "Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment". ☆109 · Updated last year
- ☆38 · Updated 2 years ago
- Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses (NeurIPS 2024). ☆65 · Updated 11 months ago
- A survey of privacy problems in Large Language Models (LLMs); contains summaries of the corresponding papers along with relevant code. ☆68 · Updated last year
- Official code for the ACL 2023 paper "Ethicist: Targeted Training Data Extraction Through Loss Smoothed Soft Prompting and Calibrated Confid…". ☆23 · Updated 2 years ago
- ☆13 · Updated 3 years ago
- TrustAgent: Towards Safe and Trustworthy LLM-based Agents. ☆53 · Updated 10 months ago
- Official repo for the paper "Recovering Private Text in Federated Learning of Language Models" (NeurIPS 2022). ☆61 · Updated 2 years ago
- ☆46 · Updated last year
- Code for the paper "Be Careful about Poisoned Word Embeddings: Exploring the Vulnerability of the Embedding Layers in NLP Models" (NAACL-…). ☆43 · Updated 4 years ago
- Code & data for the paper "Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents" [NeurIPS 2024]. ☆103 · Updated last year
- ☆69 · Updated last year
- [ICLR'24 Spotlight] DP-OPT: Make Large Language Model Your Privacy-Preserving Prompt Engineer. ☆46 · Updated last year
- Training data extraction on GPT-2. ☆194 · Updated 2 years ago
- ☆37 · Updated last year
- ☆24 · Updated 2 years ago