spring-epfl / trickster
Library and experiments for attacking machine learning in discrete domains
☆45 · Updated last year
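trickster is aimed at adversarial attacks on models over discrete feature spaces (text, categorical, graph-like inputs). As a rough, hedged illustration of what such an attack looks like, here is a generic greedy feature-flip sketch against a toy scikit-learn model; this is not trickster's actual API, and the dataset, model, and function names are invented for the example.

```python
# Hedged sketch: a greedy feature-flip attack on a classifier over binary
# features. This illustrates discrete-domain adversarial search in general,
# NOT trickster's API; the data, model, and names are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy dataset: 10 binary features, label = majority vote of features 0, 2, 4.
X = rng.integers(0, 2, size=(200, 10))
y = ((X[:, 0] + X[:, 2] + X[:, 4]) >= 2).astype(int)
clf = LogisticRegression().fit(X, y)

def greedy_flip_attack(x, model, target=0, max_flips=5):
    """Greedily flip one binary feature at a time until the model predicts `target`."""
    x = x.copy()
    for _ in range(max_flips):
        if model.predict([x])[0] == target:
            return x  # adversarial example found
        # Try every single-feature flip and keep the one that most increases
        # the model's confidence in the target class.
        scored_flips = []
        for i in range(len(x)):
            x_flip = x.copy()
            x_flip[i] = 1 - x_flip[i]
            scored_flips.append((model.predict_proba([x_flip])[0][target], i))
        _, best_i = max(scored_flips)
        x[best_i] = 1 - x[best_i]
    return x if model.predict([x])[0] == target else None

x0 = X[y == 1][0]                       # a point currently classified as 1
adv = greedy_flip_attack(x0, clf, target=0)
print("original:", x0, "adversarial:", adv)
```

A real attack library would typically frame this as a search over admissible transformations under a cost budget rather than raw greedy flips; the sketch only conveys the general shape of the problem.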
Related projects
Alternatives and complementary repositories for trickster
- Plausible-looking adversarial examples for text classification ☆92 · Updated 5 years ago
- Code for the paper "Weight Poisoning Attacks on Pre-trained Models" (ACL 2020) ☆138 · Updated 3 years ago
- Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks [NeurIPS 2019] ☆50 · Updated 4 years ago
- Interfaces for defining Robust ML models and precisely specifying the threat models under which they claim to be secure ☆62 · Updated 5 years ago
- Codebase for the paper "Black-box Generation of Adversarial Text Sequences to Evade Deep Learning Classifiers" / Interactive Demo @ ☆73 · Updated last year
- Concealed Data Poisoning Attacks on NLP Models ☆20 · Updated last year
- ☆32 · Updated 6 years ago
- Code for "Evaluating Explainable AI: Which Algorithmic Explanations Help Users Predict Model Behavior?" ☆44 · Updated 9 months ago
- A community-run reference for state-of-the-art adversarial example defenses ☆49 · Updated 3 weeks ago
- Code for "Testing Robustness Against Unforeseen Adversaries" ☆80 · Updated 3 months ago
- ☆28 · Updated 3 years ago
- Implementation code for the paper "Generating Natural Language Adversarial Examples" ☆167 · Updated 5 years ago
- Code for "Imitation Attacks and Defenses for Black-box Machine Translation Systems" ☆36 · Updated 4 years ago
- TextHide: Tackling Data Privacy in Language Understanding Tasks ☆30 · Updated 3 years ago
- ☆140 · Updated last month
- ☆37 · Updated 4 years ago
- Implementation of membership inference and model inversion attacks, extracting training data information from an ML model. Benchmarking … ☆99 · Updated 5 years ago
- EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples ☆37 · Updated 6 years ago
- Code for "Differential Privacy Has Disparate Impact on Model Accuracy" (NeurIPS 2019) ☆34 · Updated 3 years ago
- to add ☆20 · Updated 4 years ago
- Code corresponding to the paper "Adversarial Examples are not Easily Detected..." ☆83 · Updated 7 years ago
- The code reproduces the results of the experiments in the paper. In particular, it performs experiments in which machine-learning models … ☆19 · Updated 3 years ago
- Game-Theoretic Adversarial Machine Learning Library ☆57 · Updated 6 years ago
- Supervised Local Modeling for Interpretability ☆28 · Updated 6 years ago
- Generate adversarial text via gradient methods ☆31 · Updated 5 years ago
- Task-agnostic universal black-box attacks on computer vision neural networks via procedural noise (CCS'19) ☆55 · Updated 3 years ago
- Lint for privacy ☆26 · Updated 2 years ago
- Code/figures for "Right for the Right Reasons" ☆55 · Updated 3 years ago
- This repository contains binaries for the multiple-teacher approach to learning differentially private ML models: https://arxiv.org/abs/161… ☆10 · Updated 7 years ago
- ☆23 · Updated last year