☆52 · Aug 28, 2021 · Updated 4 years ago
Alternatives and similar repositories for Inverse_Collaborative_Inference
Users interested in Inverse_Collaborative_Inference are comparing it to the libraries listed below.
- Supplementary code for the paper "UnSplit: Data-Oblivious Model Inversion, Model Stealing, and Label Inference Attacks Against Split Lear…" ☆15 · Nov 10, 2022 · Updated 3 years ago
- ☆25 · Jul 12, 2021 · Updated 4 years ago
- Privacy attacks on Split Learning ☆45 · Nov 15, 2021 · Updated 4 years ago
- Code for the CSF 2018 paper "Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting" ☆37 · Jan 28, 2019 · Updated 7 years ago
- THU-AIR Federated Learning Privacy and Security ☆13 · Jun 26, 2023 · Updated 2 years ago
- Code repo for the paper "Label Leakage and Protection in Two-party Split Learning" (ICLR 2022) ☆22 · Mar 12, 2022 · Updated 4 years ago
- ☆13 · Sep 26, 2024 · Updated last year
- Membership Inference of Generative Models ☆15 · Oct 2, 2019 · Updated 6 years ago
- Code for the paper "Locally Distributed Deep Learning Inference on Edge Device Clusters" ☆15 · Aug 22, 2025 · Updated 8 months ago
- ☆35 · Oct 4, 2021 · Updated 4 years ago
- Reproduction of the paper "Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning" ☆63 · Feb 2, 2023 · Updated 3 years ago
- Reveals the vulnerabilities of SplitNN ☆31 · Jun 16, 2022 · Updated 3 years ago
- Code and supplementary material for the paper "Label Inference Attacks Against Federated Learning" (USENIX Security 2022) ☆87 · Jun 27, 2023 · Updated 2 years ago
- "Towards Reverse-Engineering Black-Box Neural Networks", ICLR 2018 ☆55 · Jun 7, 2019 · Updated 6 years ago
- Implementation of the model inversion attack introduced in "Model Inversion Attacks that Exploit Confidence Information and Basic Counte…" ☆84 · Feb 26, 2023 · Updated 3 years ago
- ☆25 · Mar 7, 2024 · Updated 2 years ago
- Research into model inversion on SplitNN ☆18 · Feb 20, 2024 · Updated 2 years ago
- Code for "Analyzing Federated Learning through an Adversarial Lens" (https://arxiv.org/abs/1811.12470) ☆153 · Oct 3, 2022 · Updated 3 years ago
- TextHide: Tackling Data Privacy in Language Understanding Tasks ☆30 · Apr 19, 2021 · Updated 5 years ago
- Code for "Exploiting Unintended Feature Leakage in Collaborative Learning" (Oakland 2019) ☆56 · May 28, 2019 · Updated 6 years ago
- Runs several layers of a deep learning model in TrustZone ☆94 · Aug 8, 2024 · Updated last year
- R-GAP: Recursive Gradient Attack on Privacy [Accepted at ICLR 2021] ☆37 · Feb 20, 2023 · Updated 3 years ago
- SoK: All You Need to Know About On-Device ML Model Extraction - The Gap Between Research and Practice ☆16 · May 19, 2025 · Updated 11 months ago
- Code for the paper "ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models" ☆84 · Nov 22, 2021 · Updated 4 years ago
- Code for the Findings of ACL 2023 paper "Sentence Embedding Leaks More Information than You Expect: Generative Embedding Inversion Attack to Rec…" ☆46 · Jun 3, 2024 · Updated last year
- Code for "Purify Unlearnable Examples via Rate-Constrained Variational Autoencoders" (ICML 2024) ☆10 · Sep 18, 2025 · Updated 7 months ago
- [ICCV 2025] Official implementation of "AD-GS: Object-Aware B-Spline Gaussian Splatting for Self-Supervised Autonomous Driving" ☆36 · Jul 15, 2025 · Updated 9 months ago
- A method for training neural networks that are provably robust to adversarial attacks [IJCAI 2019] ☆10 · Sep 3, 2019 · Updated 6 years ago
- PyTorch dataloader for GTSRB (German Traffic Sign Recognition Benchmark) ☆20 · Apr 16, 2019 · Updated 7 years ago
- Codebase for the paper "Adversarial Attacks on Time Series" ☆22 · Mar 1, 2019 · Updated 7 years ago
- A library for running membership inference attacks against ML models ☆150 · Dec 8, 2022 · Updated 3 years ago
- Privacy Risks of Securing Machine Learning Models against Adversarial Examples ☆46 · Nov 25, 2019 · Updated 6 years ago
- Benchmark for Anomaly Detection in Semantic Segmentation ☆12 · Feb 27, 2026 · Updated 2 months ago
- PyTorch implementation of the paper "Reinforcement Learning-Based Black-Box Model Inversion Attacks" (CVPR 2023) ☆40 · May 4, 2023 · Updated 3 years ago
- Breaking Certifiable Defenses ☆17 · Nov 22, 2022 · Updated 3 years ago
- Certified robustness "for free" using off-the-shelf diffusion models and classifiers ☆44 · May 25, 2023 · Updated 2 years ago
- ☆13 · Apr 12, 2022 · Updated 4 years ago
- News in privacy-preserving machine learning ☆12 · Feb 5, 2020 · Updated 6 years ago
- Paper list on federated learning system design ☆13 · Apr 13, 2022 · Updated 4 years ago