Code for the IEEE S&P 2018 paper 'Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning'
☆55 · Updated Mar 24, 2021
Alternatives and similar repositories for manip-ml
Users interested in manip-ml are comparing it to the repositories listed below.
- ☆33 · Updated Nov 27, 2017
- [Preprint] On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping ☆10 · Updated Feb 27, 2020
- [ICLR'21] Dataset Inference for Ownership Resolution in Machine Learning ☆31 · Updated Oct 10, 2022
- Privacy Risks of Securing Machine Learning Models against Adversarial Examples ☆46 · Updated Nov 25, 2019
- Attacking a dog-vs-fish classifier that uses transfer learning (InceptionV3) ☆74 · Updated Apr 12, 2018
- Resources on adversarial examples and poisoning attacks ☆118 · Updated Jun 3, 2019
- Seminar 2016 ☆25 · Updated Aug 19, 2024
- Code for model-targeted poisoning ☆12 · Updated Oct 3, 2023
- Craft poisoned data using MetaPoison ☆54 · Updated Apr 5, 2021
- Code for the paper "Deep Partition Aggregation: Provable Defenses against General Poisoning Attacks" ☆13 · Updated Aug 22, 2022
- Code implementation for Traceback of Data Poisoning Attacks in Neural Networks ☆21 · Updated Aug 15, 2022
- ☆27 · Updated Oct 17, 2022
- Code implementation of the paper "Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks", at IEEE Security and P… ☆314 · Updated Feb 28, 2020
- A simple backdoor model for federated learning. We use MNIST as the original data set for data attack and we use the CIFAR-10 data set… ☆14 · Updated Jun 19, 2020
- Implementation for Poison Attacks against Text Datasets with Conditional Adversarially Regularized Autoencoder (EMNLP-Findings 2020) ☆15 · Updated Oct 8, 2020
- Input-aware Dynamic Backdoor Attack (NeurIPS 2020) ☆39 · Updated Jul 22, 2024
- Implementations of data poisoning attacks against neural networks and related defenses ☆105 · Updated Jul 16, 2024
- A unified benchmark problem for data poisoning attacks ☆162 · Updated Oct 4, 2023
- ☆18 · Updated Sep 29, 2020
- Code for "Analyzing Federated Learning through an Adversarial Lens" https://arxiv.org/abs/1811.12470 ☆153 · Updated Oct 3, 2022
- Code for the paper "Quantifying Privacy Leakage in Graph Embedding" published in MobiQuitous 2020 ☆18 · Updated Nov 11, 2021
- Code for the TPDS paper "Towards Fair and Privacy-Preserving Federated Deep Models" ☆32 · Updated Jun 16, 2022
- 👿→😈 ☆25 · Updated Dec 19, 2017
- A paper summary of backdoor attacks against neural networks ☆13 · Updated Aug 9, 2019
- Trojan Attack on Neural Network ☆190 · Updated Mar 25, 2022
- ☆27 · Updated Dec 15, 2022
- ☆102 · Updated Oct 19, 2020
- Experiments on Data Poisoning Regression Learning ☆12 · Updated Oct 5, 2020
- On Training Robust PDF Malware Classifiers (USENIX Security '20) https://arxiv.org/abs/1904.03542 ☆30 · Updated Dec 27, 2021
- Caffe code for the paper "Adversarial Manipulation of Deep Representations" ☆17 · Updated Nov 6, 2017
- ☆19 · Updated Jun 21, 2021
- Mitigating Adversarial Effects Through Randomization ☆120 · Updated Mar 20, 2018
- Code for Exploiting Unintended Feature Leakage in Collaborative Learning (Oakland 2019) ☆56 · Updated May 28, 2019
- ☆25 · Updated Feb 21, 2019
- RAB: Provable Robustness Against Backdoor Attacks