Eric-Wallace / data-poisoning
Concealed Data Poisoning Attacks on NLP Models
☆21 · Updated 2 years ago
Alternatives and similar repositories for data-poisoning
Users interested in data-poisoning are comparing it to the libraries listed below.
- Code for "Imitation Attacks and Defenses for Black-box Machine Translation Systems" · ☆35 · Updated 5 years ago
- Code for the paper "Weight Poisoning Attacks on Pre-trained Models" (ACL 2020) · ☆143 · Updated 4 months ago
- Code for "Evaluating Explainable AI: Which Algorithmic Explanations Help Users Predict Model Behavior?" · ☆45 · Updated 2 years ago
- Implementation of experiments in the paper "Learning from Rules Generalizing Labeled Exemplars" (ICLR 2020, https://openreview.net…) · ☆50 · Updated 2 years ago
- IPython notebook with synthetic experiments for AFLite, based on the ICML 2020 paper "Adversarial Filters of Dataset Biases" · ☆16 · Updated 5 years ago
- The code reproduces the results of the experiments in the paper. In particular, it performs experiments in which machine-learning models … · ☆20 · Updated 4 years ago
- [EMNLP 2020] "T3: Tree-Autoencoder Constrained Adversarial Text Generation for Targeted Attack" by Boxin Wang, Hengzhi Pei, Boyuan Pan, Q… · ☆26 · Updated 4 years ago
- EMNLP BlackboxNLP 2020: "Searching for a Search Method: Benchmarking Search Algorithms for Generating NLP Adversarial Examples" · ☆26 · Updated 5 years ago
- ☆25 · Updated 5 years ago
- ☆64 · Updated 3 years ago
- A framework for adversarial attacks against token classification models · ☆33 · Updated 4 years ago
- Implementation of "Poison Attacks against Text Datasets with Conditional Adversarially Regularized Autoencoder" (EMNLP Findings 2020) · ☆15 · Updated 5 years ago
- ☆62 · Updated 4 years ago
- (ICML 2021) Mandoline: Model Evaluation under Distribution Shift · ☆30 · Updated 4 years ago
- Code for the TACL 2019 paper "Trick Me If You Can: Human-in-the-loop Generation of Adversarial Question Answering Examples" · ☆36 · Updated 6 years ago
- OOD Generalization and Detection (ACL 2020) · ☆59 · Updated 5 years ago
- Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks [NeurIPS 2019] · ☆50 · Updated 5 years ago
- Library and experiments for attacking machine learning in discrete domains · ☆47 · Updated 3 years ago
- to add · ☆20 · Updated 6 years ago
- A2T: Towards Improving Adversarial Training of NLP Models (EMNLP 2021 Findings) · ☆26 · Updated 4 years ago
- Code for the TACL paper "Nurse is Closer to Woman than Surgeon? Mitigating Gender-Biased Proximities in Word Embeddings" · ☆16 · Updated 5 years ago
- Model Patching: Closing the Subgroup Performance Gap with Data Augmentation · ☆42 · Updated 5 years ago
- ☆51 · Updated 7 years ago
- [ICLR 2021] "InfoBERT: Improving Robustness of Language Models from An Information Theoretic Perspective" by Boxin Wang, Shuohang Wang, Y… · ☆85 · Updated 2 years ago
- diagNNose is a Python library that provides a broad set of tools for analysing the hidden activations of neural models · ☆82 · Updated 2 years ago
- Code for the paper "When Can Models Learn From Explanations? A Formal Framework for Understanding the Roles of Explanation Data" · ☆14 · Updated 4 years ago
- Code and data for "Are Large Pre-Trained Language Models Leaking Your Personal Information?" (Findings of EMNLP 2022) · ☆27 · Updated 3 years ago
- TextHide: Tackling Data Privacy in Language Understanding Tasks · ☆31 · Updated 4 years ago
- "Predict, then Interpolate: A Simple Algorithm to Learn Stable Classifiers" (ICML 2021) · ☆18 · Updated 4 years ago
- [ICML 2021] Towards Understanding and Mitigating Social Biases in Language Models · ☆61 · Updated 3 years ago