xlhex / cater_neurips
☆6 · Updated 2 years ago
Alternatives and similar repositories for cater_neurips
Users interested in cater_neurips are comparing it to the libraries listed below.
- Implementation of the paper "Exploring the Universal Vulnerability of Prompt-based Learning Paradigm" (Findings of NAACL 2022) ☆29 · Updated 2 years ago
- Code for the paper "Rethinking Stealthiness of Backdoor Attack against NLP Models" (ACL-IJCNLP 2021) ☆24 · Updated 3 years ago
- Code for the paper "BadPrompt: Backdoor Attacks on Continuous Prompts" ☆36 · Updated 11 months ago
- ☆21 · Updated last year
- Code for the paper "RAP: Robustness-Aware Perturbations for Defending against Backdoor Attacks on NLP Models" (EMNLP 2021) ☆24 · Updated 3 years ago
- ☆20 · Updated last year
- ☆1 · Updated last year
- RAB: Provable Robustness Against Backdoor Attacks ☆39 · Updated last year
- ☆29 · Updated last year
- [Preprint] On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping ☆10 · Updated 5 years ago
- [EMNLP 2022] Distillation-Resistant Watermarking (DRW) for Model Protection in NLP ☆13 · Updated last year
- ☆43 · Updated 2 years ago
- Code for reproducing the results of the paper "Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness", published at IC… ☆27 · Updated 5 years ago
- Code for the paper "The Good and The Bad: Exploring Privacy Issues in Retrieval-Augmented Generation (RAG)", exploring the privacy risk o… ☆48 · Updated 4 months ago
- ☆9 · Updated 4 years ago
- Official implementation of "RelaxLoss: Defending Membership Inference Attacks without Losing Utility" (ICLR 2022) ☆50 · Updated 2 years ago
- Code for the WWW'23 paper "Sanitizing Sentence Embeddings (and Labels) for Local Differential Privacy" ☆12 · Updated 2 years ago
- ☆19 · Updated 4 years ago
- ☆24 · Updated 2 years ago
- Code and data for the paper "A Semantic Invariant Robust Watermark for Large Language Models", accepted at ICLR 2024 ☆32 · Updated 7 months ago
- ☆24 · Updated 2 years ago
- [S&P'24] Test-Time Poisoning Attacks Against Test-Time Adaptation Models ☆18 · Updated 4 months ago
- ☆24 · Updated last year
- ☆9 · Updated 4 years ago
- Code for the paper "Universal Jailbreak Backdoors from Poisoned Human Feedback" ☆55 · Updated last year
- Code and data accompanying the Zhu et al. paper "An Objective for Nuanced LLM Jailbreaks" ☆32 · Updated 6 months ago
- ☆31 · Updated 3 years ago
- Implementation of Poison Attacks against Text Datasets with Conditional Adversarially Regularized Autoencoder (EMNLP-Findings 2020) ☆15 · Updated 4 years ago
- ☆25 · Updated 4 years ago
- ☆22 · Updated 10 months ago