Code for the IEEE S&P 2018 paper 'Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning'
(☆55, updated Mar 24, 2021)
Alternatives and similar repositories for manip-ml
Users interested in manip-ml are comparing it to the repositories listed below.
- [Preprint] On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping (☆10, updated Feb 27, 2020)
- Privacy Risks of Securing Machine Learning Models against Adversarial Examples (☆46, updated Nov 25, 2019)
- [ICLR'21] Dataset Inference for Ownership Resolution in Machine Learning (☆32, updated Oct 10, 2022)
- The code is for our NeurIPS 2019 paper: https://arxiv.org/abs/1910.04749 (☆34, updated Mar 28, 2020)
- (☆33, updated Nov 27, 2017)
- (☆27, updated Oct 17, 2022)
- Seminar 2016 (☆25, updated Aug 19, 2024)
- Code implementation of the paper "Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks", at IEEE Security and P… (☆314, updated Feb 28, 2020)
- Code for the paper "Deep Partition Aggregation: Provable Defenses against General Poisoning Attacks" (☆13, updated Aug 22, 2022)
- Craft poisoned data using MetaPoison (☆54, updated Apr 5, 2021)
- (☆15, updated Jul 27, 2023)
- A paper summary of Backdoor Attack against Neural Network (☆13, updated Aug 9, 2019)
- Input-aware Dynamic Backdoor Attack (NeurIPS 2020) (☆38, updated Jul 22, 2024)
- ML research on software vulnerabilities (☆18, updated Sep 8, 2019)
- Caffe code for the paper "Adversarial Manipulation of Deep Representations" (☆17, updated Nov 6, 2017)
- (☆18, updated Sep 29, 2020)
- Code for "Imitation Attacks and Defenses for Black-box Machine Translation Systems" (☆35, updated May 1, 2020)
- RAB: Provable Robustness Against Backdoor Attacks (☆39, updated Oct 3, 2023)
- A unified benchmark problem for data poisoning attacks (☆162, updated Oct 4, 2023)
- (☆102, updated Oct 19, 2020)
- A simple backdoor model for federated learning. We use MNIST as the original dataset for the data attack and we use the CIFAR-10 dataset… (☆14, updated Jun 19, 2020)
- The code for our Updates-Leak paper (☆17, updated Jul 23, 2020)
- Implementation for Poison Attacks against Text Datasets with Conditional Adversarially Regularized Autoencoder (EMNLP-Findings 2020) (☆15, updated Oct 8, 2020)
- PyTorch implementation of backdoor unlearning (☆21, updated Jun 8, 2022)
- Accompanying source code for "Runaway Feedback Loops in Predictive Policing" (☆17, updated Dec 13, 2017)
- (☆67, updated Jul 30, 2019)
- (☆22, updated Sep 17, 2024)
- (☆19, updated Jun 21, 2021)
- Official code for the paper "Membership Inference Attacks Against Recommender Systems" (ACM CCS 2021) (☆20, updated Oct 8, 2024)
- Example of the attack described in the paper "Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization" (☆21, updated Oct 25, 2019)
- DBA: Distributed Backdoor Attacks against Federated Learning (ICLR 2020) (☆203, updated Aug 5, 2021)
- Robust aggregation for federated learning with the RFA algorithm (☆54, updated Sep 13, 2022)
- 👿→😈 (☆25, updated Dec 19, 2017)
- Implementations of data poisoning attacks against neural networks and related defenses (☆104, updated Jul 16, 2024)
- (☆27, updated Dec 15, 2022)
- Icon Hash by Python (☆12, updated May 28, 2018)
- Profit Allocation for Federated Learning (☆24, updated Apr 27, 2020)
- Code for paper "Poisoned classifiers are not only backdoored, they are fundamentally broken" (☆26, updated Jan 7, 2022)
- (☆26, updated Jan 25, 2019)