AhmedSalem2 / Updates-Leak
The code for our Updates-Leak paper
☆17 · Updated Jul 23, 2020
Alternatives and similar repositories for Updates-Leak
Users who are interested in Updates-Leak are comparing it to the repositories listed below.
- [ICLR'21] Dataset Inference for Ownership Resolution in Machine Learning (☆32 · Updated Oct 10, 2022)
- [Preprint] On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping (☆10 · Updated Feb 27, 2020)
- [CCS-LAMPS'24] LLM IP Protection Against Model Merging (☆16 · Updated Oct 14, 2024)
- Verifying machine unlearning by backdooring (☆20 · Updated Mar 25, 2023)
- ☆20 · Updated Jun 1, 2022
- Code related to the paper "Machine Unlearning of Features and Labels" (☆71 · Updated Feb 13, 2024)
- Code for "On the Trade-off between Adversarial and Backdoor Robustness" (NIPS 2020) (☆17 · Updated Nov 11, 2020)
- PyTorch implementation of backdoor unlearning (☆21 · Updated Jun 8, 2022)
- Privacy Risks of Securing Machine Learning Models against Adversarial Examples (☆46 · Updated Nov 25, 2019)
- Official code for the paper "Membership Inference Attacks Against Recommender Systems" (ACM CCS 2021) (☆20 · Updated Oct 8, 2024)
- ☆19 · Updated Mar 6, 2023
- ☆27 · Updated Oct 17, 2022
- ☆45 · Updated Nov 10, 2019
- Code for paper "Poisoned classifiers are not only backdoored, they are fundamentally broken" (☆26 · Updated Jan 7, 2022)
- ☆31 · Updated Oct 7, 2021
- [ICML 2024] Sparse Model Inversion: Efficient Inversion of Vision Transformers with Less Hallucination (☆13 · Updated Apr 29, 2025)
- The reproduction of the paper Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning (☆63 · Updated Feb 2, 2023)
- The code is for our NeurIPS 2019 paper: https://arxiv.org/abs/1910.04749 (☆34 · Updated Mar 28, 2020)
- Code for Membership Inference Attack against Machine Learning Models (in Oakland 2017) (☆199 · Updated Nov 15, 2017)
- Implementation of the Model Inversion Attack introduced with Model Inversion Attacks that Exploit Confidence Information and Basic Counte… (☆85 · Updated Feb 26, 2023)
- ☆12 · Updated Dec 22, 2025
- Code for "Zero-Shot Out-of-Distribution Detection with Feature Correlations" (☆13 · Updated Jan 19, 2020)
- Research simulation toolkit for federated learning (☆13 · Updated Nov 7, 2020)
- TKDE'23: A Survey and Experimental Study on Privacy-Preserving Trajectory Data Publishing (☆12 · Updated May 5, 2023)
- Tool for testing IPv4 and IPv6 DHCP services (☆13 · Updated Mar 27, 2020)
- ConvexPolytopePosioning (☆37 · Updated Jan 10, 2020)
- RAB: Provable Robustness Against Backdoor Attacks (☆39 · Updated Oct 3, 2023)
- Container Virtual Service (☆13 · Updated Aug 10, 2022)
- 1-step Q Learning from the paper "Asynchronous Methods for Deep Reinforcement Learning" (☆12 · Updated Mar 13, 2017)
- Attacks using out-of-distribution adversarial examples (☆11 · Updated Nov 19, 2019)
- This is the repository for the code and artifacts related to the CCS 2022 paper: C2C: Fine-grained Configuration-driven System Call Filter… (☆11 · Updated Nov 4, 2022)
- This project processes eBPF events into Prometheus metrics via a Go user-space application. A Grafana dashboard is included to visualize Ke… (☆14 · Updated Apr 22, 2025)
- Official repo of the paper Deep Regression Unlearning, accepted at ICML 2023 (☆14 · Updated Jun 14, 2023)
- Official code implementation for the CCS 2022 paper "On the Privacy Risks of Cell-Based NAS Architectures" (☆11 · Updated Nov 21, 2022)
- Simulation code for Federated Learning with Over-the-Air Computation (☆11 · Updated Sep 11, 2020)
- Code for the paper "RemovalNet: DNN model fingerprinting removal attack", IEEE TDSC 2023 (☆10 · Updated Nov 27, 2023)
- Heterogeneous Model Reuse via Optimizing Multiparty Multiclass Margin (☆11 · Updated Jan 15, 2020)
- Official code of the paper "A Stealthy Wrongdoer: Feature-Oriented Reconstruction Attack against Split Learning" (☆15 · Updated Sep 11, 2024)
- Code for the paper "Overconfidence is a Dangerous Thing: Mitigating Membership Inference Attacks by Enforcing Less Confident Prediction" … (☆12 · Updated Sep 6, 2023)