lmunoz-gonzalez / Poisoning-Attacks-with-Back-gradient-Optimization
Example of the attack described in the paper "Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization"
☆21 · Oct 25, 2019 · Updated 6 years ago
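The paper behind this repository frames poisoning as a bilevel problem: the attacker adjusts poisoning points so that a model trained on the poisoned set has higher loss on clean validation data, computing gradients through the training procedure itself. The sketch below is not the repository's code; it illustrates the idea for 2-class logistic regression using two simplifying assumptions (a single inner SGD step instead of unrolled training, and finite differences instead of back-gradients), with all names hypothetical:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_loss(w, X, y):
    # gradient of the mean logistic loss w.r.t. the weights w
    p = sigmoid(X @ w)
    return X.T @ (p - y) / len(y)

def poison_step(w0, X_tr, y_tr, x_p, y_p, X_val, y_val, lr=0.5, eps=0.1):
    """One ascent step on the poison point x_p: increase the clean
    validation loss of a model trained (one SGD step, an assumption
    standing in for full unrolled training) on the poisoned set."""
    def val_loss_after_training(xp):
        X = np.vstack([X_tr, xp])          # training set + poison point
        y = np.append(y_tr, y_p)
        w = w0 - lr * grad_loss(w0, X, y)  # inner (one-step) training
        p = sigmoid(X_val @ w)
        return -np.mean(y_val * np.log(p) + (1 - y_val) * np.log(1 - p))

    # outer gradient w.r.t. x_p by central finite differences
    # (the paper's back-gradient method computes this by reversing
    # the training updates instead)
    g = np.zeros_like(x_p)
    h = 1e-5
    for i in range(len(x_p)):
        e = np.zeros_like(x_p)
        e[i] = h
        g[i] = (val_loss_after_training(x_p + e) -
                val_loss_after_training(x_p - e)) / (2 * h)
    return x_p + eps * g                   # ascend the validation loss
```

Iterating `poison_step` moves the (typically mislabeled) poison point in the direction that most degrades the trained model, which is the outer loop of the attack; the repository replaces the one-step inner training with full back-gradient descent through many updates.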
Alternatives and similar repositories for Poisoning-Attacks-with-Back-gradient-Optimization
Users interested in Poisoning-Attacks-with-Back-gradient-Optimization are comparing it to the repositories listed below.
- wx ☆11 · Aug 14, 2022 · Updated 3 years ago
- [Preprint] On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping ☆10 · Feb 27, 2020 · Updated 5 years ago
- Code for the paper "RemovalNet: DNN model fingerprinting removal attack", IEEE TDSC 2023. ☆10 · Nov 27, 2023 · Updated 2 years ago
- ☆24 · Apr 14, 2019 · Updated 6 years ago
- Implementation of Self-supervised-Online-Adversarial-Purification ☆13 · Aug 2, 2021 · Updated 4 years ago
- ☆50 · Feb 27, 2021 · Updated 4 years ago
- Code for the "Live Trojan Attacks on Deep Neural Networks" paper ☆10 · May 8, 2020 · Updated 5 years ago
- The official PyTorch implementation of the ACM MM 19 paper "MetaAdvDet: Towards Robust Detection of Evolving Adversarial Attacks" ☆11 · Jun 7, 2021 · Updated 4 years ago
- A unified benchmark problem for data poisoning attacks ☆161 · Oct 4, 2023 · Updated 2 years ago
- Bullseye Polytope Clean-Label Poisoning Attack ☆15 · Nov 5, 2020 · Updated 5 years ago
- Code for the paper "Dynamic Backdoor Attacks Against Machine Learning Models" ☆16 · Nov 20, 2023 · Updated 2 years ago
- ☆18 · Oct 7, 2022 · Updated 3 years ago
- ☆16 · Dec 3, 2021 · Updated 4 years ago
- ☆18 · Sep 29, 2020 · Updated 5 years ago
- Code for the paper "Evading Black-box Classifiers Without Breaking Eggs" [SaTML 2024] ☆21 · Apr 15, 2024 · Updated last year
- A simple implementation of BadNets on MNIST ☆33 · Jul 29, 2019 · Updated 6 years ago
- Code for the paper "PoisonPrompt: Backdoor Attack on Prompt-based Large Language Models", IEEE ICASSP 2024. Demo: //124.220.228.133:11107 ☆20 · Aug 10, 2024 · Updated last year
- ☆19 · Jun 27, 2021 · Updated 4 years ago
- Official repository for the CVPR 2020 paper "Universal Litmus Patterns: Revealing Backdoor Attacks in CNNs" ☆44 · Oct 24, 2023 · Updated 2 years ago
- ☆20 · Feb 17, 2020 · Updated 5 years ago
- Code for identifying natural backdoors in existing image datasets. ☆15 · Aug 24, 2022 · Updated 3 years ago
- PyTorch deep learning object detection using the CINIC-10 dataset. ☆22 · Feb 26, 2020 · Updated 5 years ago
- Official implementation of "Provable Defense against Privacy Leakage in Federated Learning from Representation Perspective" ☆57 · May 4, 2023 · Updated 2 years ago
- Universal Adversarial Perturbations (UAPs) for PyTorch ☆49 · Aug 28, 2021 · Updated 4 years ago
- ☆21 · Aug 10, 2022 · Updated 3 years ago
- ☆23 · Aug 24, 2020 · Updated 5 years ago
- ☆26 · Dec 1, 2022 · Updated 3 years ago
- ☆28 · Aug 21, 2023 · Updated 2 years ago
- Code for the paper "Label-Only Membership Inference Attacks" ☆68 · Sep 11, 2021 · Updated 4 years ago
- Code used in "Decision Boundary Analysis of Adversarial Examples" https://openreview.net/forum?id=BkpiPMbA- ☆29 · Oct 17, 2018 · Updated 7 years ago
- Network Traffic Identification with Convolutional Neural Networks ☆27 · Jan 23, 2019 · Updated 7 years ago
- Code for reproducing the results of the paper "Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness" published at IC… ☆27 · Apr 29, 2020 · Updated 5 years ago
- Official repository of the paper "Marking Code Without Breaking It: Code Watermarking for Detecting LLM-Generated Code" (Findings of EACL … ☆12 · Updated this week
- ☆30 · Feb 1, 2019 · Updated 7 years ago
- [ICLR'21] Dataset Inference for Ownership Resolution in Machine Learning ☆32 · Oct 10, 2022 · Updated 3 years ago
- Code for the paper "Cyber-Physical Intrusion Detection System for Unmanned Aerial Vehicles," in IEEE Transacti… ☆12 · Feb 25, 2024 · Updated last year
- Reproduction of the paper "Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning". ☆63 · Feb 2, 2023 · Updated 3 years ago
- Watermarking against model extraction attacks in MLaaS (ACM MM 2021). ☆34 · Jul 15, 2021 · Updated 4 years ago
- Code implementation for "Gotta Catch 'Em All: Using Honeypots to Catch Adversarial Attacks on Neural Networks" ☆32 · Jun 7, 2022 · Updated 3 years ago