javirandor / wdr
☆10 · Updated 3 years ago
Alternatives and similar repositories for wdr
Users interested in wdr are comparing it to the libraries listed below.
- Code base for the EMNLP 2021 paper, "Multi-granularity Textual Adversarial Attack with Behavior Cloning". ☆13 · Updated 3 years ago
- Implementation of the paper "Exploring the Universal Vulnerability of Prompt-based Learning Paradigm" (Findings of NAACL 2022) ☆29 · Updated 2 years ago
- ACL 2021 - Defense against Adversarial Attacks in NLP via Dirichlet Neighborhood Ensemble ☆18 · Updated 2 years ago
- Contextualized Perturbation for Textual Adversarial Attack, NAACL 2021 ☆43 · Updated 3 years ago
- Code for "Searching for an Effective Defender: Benchmarking Defense against Adversarial Word Substitution" ☆31 · Updated last year
- [ACL2024-Main] Data and Code for WaterBench: Towards Holistic Evaluation of LLM Watermarks ☆26 · Updated last year
- Code and data of the ACL-IJCNLP 2021 paper "Hidden Killer: Invisible Textual Backdoor Attacks with Syntactic Trigger" ☆43 · Updated 2 years ago
- ☆21 · Updated 3 months ago
- Official Code for ACL 2023 paper: "Ethicist: Targeted Training Data Extraction Through Loss Smoothed Soft Prompting and Calibrated Confid… ☆23 · Updated 2 years ago
- SAFER: A Structure-free Approach For cErtified Robustness to Adversarial Word Substitutions (ACL 2020) ☆31 · Updated 4 years ago
- A lightweight library for large language model (LLM) jailbreaking defense. ☆51 · Updated 8 months ago
- [Findings of ACL 2023] Bridge the Gap Between CV and NLP! An Optimization-based Textual Adversarial Attack Framework. ☆13 · Updated last year
- ☆38 · Updated last year
- Official implementation of the EMNLP 2021 paper "ONION: A Simple and Effective Defense Against Textual Backdoor Attacks" ☆33 · Updated 3 years ago
- ☆17 · Updated last year
- ☆22 · Updated 3 months ago
- Adversarial Training with Fast Gradient Projection Method against Synonym Substitution based Text Attacks ☆24 · Updated 4 years ago
- Generating Natural Language Adversarial Examples through Probability Weighted Word Saliency ☆69 · Updated 2 years ago
- Code for the paper "Defending Against Alignment-Breaking Attacks via Robustly Aligned LLM" ☆12 · Updated last year
- ☆41 · Updated 8 months ago
- [ACL 2023] Knowledge Unlearning for Mitigating Privacy Risks in Language Models ☆81 · Updated 9 months ago
- [ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications ☆79 · Updated 2 months ago
- Code for "Certified Robustness to Text Adversarial Attacks by Randomized [MASK]" ☆16 · Updated 8 months ago
- Natural Universal Trigger Search (NUTS) ☆21 · Updated 4 years ago
- ICLR 2024 paper showing properties of safety tuning and exaggerated safety. ☆85 · Updated last year
- ☆27 · Updated last year
- [EMNLP 2023] Poisoning Retrieval Corpora by Injecting Adversarial Passages https://arxiv.org/abs/2310.19156 ☆33 · Updated last year
- Code and data for the paper "A Semantic Invariant Robust Watermark for Large Language Models", accepted by ICLR 2024. ☆32 · Updated 7 months ago
- [ACL 2024] Defending Large Language Models Against Jailbreaking Attacks Through Goal Prioritization ☆24 · Updated 11 months ago
- Natural Language Attacks in a Hard Label Black Box Setting. ☆47 · Updated 4 years ago