Code for the paper "RAP: Robustness-Aware Perturbations for Defending against Backdoor Attacks on NLP Models" (EMNLP 2021)
☆25 · Updated Oct 21, 2021
Alternatives and similar repositories for RAP
Users that are interested in RAP are comparing it to the libraries listed below
- Code for the paper "Rethinking Stealthiness of Backdoor Attack against NLP Models" (ACL-IJCNLP 2021) · ☆24 · Updated Dec 9, 2021
- ☆18 · Updated Aug 15, 2022
- Code for the paper "Be Careful about Poisoned Word Embeddings: Exploring the Vulnerability of the Embedding Layers in NLP Models" (NAACL-… · ☆44 · Updated Jul 26, 2021
- ☆25 · Updated Jun 23, 2021
- Hidden backdoor attack on NLP systems · ☆47 · Updated Nov 14, 2021
- The code implementation of MuScleLoRA (accepted in ACL 2024) · ☆10 · Updated Dec 1, 2024
- ☆18 · Updated Jul 1, 2021
- Official implementation of the EMNLP 2021 paper "ONION: A Simple and Effective Defense Against Textual Backdoor Attacks" · ☆36 · Updated Nov 3, 2021
- TrojanLM: Trojaning Language Models for Fun and Profit · ☆16 · Updated Jun 17, 2021
- Code and data of the ACL-IJCNLP 2021 paper "Hidden Killer: Invisible Textual Backdoor Attacks with Syntactic Trigger" · ☆43 · Updated Sep 11, 2022
- Code and data of the ACL 2021 paper "Turn the Combination Lock: Learnable Textual Backdoor Attacks via Word Substitution" · ☆16 · Updated Jun 29, 2021
- Implementation of "Poison Attacks against Text Datasets with Conditional Adversarially Regularized Autoencoder" (EMNLP-Findings 2020) · ☆15 · Updated Oct 8, 2020
- ☆19 · Updated Mar 9, 2024
- Code for the paper "The Philosopher's Stone: Trojaning Plugins of Large Language Models" · ☆27 · Updated Sep 11, 2024
- An open-source toolkit for textual backdoor attack and defense (NeurIPS 2022 D&B, Spotlight) · ☆200 · Updated Apr 10, 2023
- This work corroborates a run-time Trojan detection method exploiting STRong Intentional Perturbation of inputs; it is a multi-domain Trojan … · ☆10 · Updated Mar 7, 2021
- The code implementation of GraCeFul (accepted in COLING 2025) · ☆13 · Updated Jan 27, 2025
- Implementation of "Robust Watermarking of Neural Network with Exponential Weighting" in TensorFlow · ☆13 · Updated Dec 2, 2020
- Implementation for the IEEE S&P 2022 paper "Model Orthogonalization: Class Distance Hardening in Neural Networks for Better Secur… · ☆11 · Updated Aug 24, 2022
- Code repository for the paper "Towards A Proactive ML Approach for Detecting Backdoor Poison Samples" (USENIX Security 2023) · ☆30 · Updated Jul 11, 2023
- ☆70 · Updated Feb 4, 2024
- [ICLR 2025] Detecting Backdoor Samples in Contrastive Language Image Pretraining · ☆19 · Updated Feb 26, 2025
- [ICLR 2025] Code and data for the paper "Super(ficial)-alignment: Strong Models May Deceive Weak Models in Weak-to-Strong Generalization" · ☆14 · Updated Jun 21, 2024
- Official implementation of the NeurIPS 2022 paper "Pre-activation Distributions Expose Backdoor Neurons" · ☆15 · Updated Jan 13, 2023
- Backdooring Neural Code Search · ☆14 · Updated Sep 8, 2023
- Research artifact of the USENIX Security 2023 paper "Precise and Generalized Robustness Certification for Neural Networks" · ☆13 · Updated Jun 20, 2023
- Code and data for the paper "Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents" (NeurIPS 2024) · ☆109 · Updated Sep 27, 2024
- ☆13 · Updated Oct 21, 2021
- DataLoader for the TinyImageNet dataset · ☆12 · Updated Sep 15, 2021
- ☆16 · Updated Aug 14, 2022
- Code for CascadeBERT (Findings of EMNLP 2021) · ☆12 · Updated Mar 30, 2022
- Official repo for "ProSec: Fortifying Code LLMs with Proactive Security Alignment" · ☆17 · Updated this week
- [CCS 2021] "DataLens: Scalable Privacy Preserving Training via Gradient Compression and Aggregation" by Boxin Wang*, Fan Wu*, Yunhui Long… · ☆36 · Updated Dec 28, 2021
- Code for "Backdoor Attacks Against Dataset Distillation" · ☆37 · Updated Apr 19, 2023
- TextGuard: Provable Defense against Backdoor Attacks on Text Classification · ☆13 · Updated Nov 7, 2023
- AI model security reading notes · ☆41 · Updated Mar 14, 2025
- [ICSE-SEIP '21] Robustness of On-Device Models: Adversarial Attack to Deep Learning Models on Android Apps · ☆16 · Updated Jun 2, 2022
- Investigating Robustness and Interpretability of Link Prediction via Adversarial Modifications · ☆19 · Updated Jul 20, 2020
- ☆22 · Updated May 28, 2025