Tongzhou0101 / NNSplitter
This is the official implementation of NNSplitter (ICML'23)
☆12 · Updated 7 months ago
Alternatives and similar repositories for NNSplitter:
Users that are interested in NNSplitter are comparing it to the libraries listed below
- R-GAP: Recursive Gradient Attack on Privacy [Accepted at ICLR 2021] ☆34 · Updated last year
- Sample code implementing the Targeted Bit Trojan attack ☆18 · Updated 4 years ago
- [NeurIPS 2021] "Drawing Robust Scratch Tickets: Subnetworks with Inborn Robustness Are Found within Randomly Initialized Networks" by Yon… ☆13 · Updated 2 years ago
- Source code for ECCV 2022 Poster: Data-free Backdoor Removal based on Channel Lipschitzness ☆29 · Updated 2 years ago
- Membership Inference Attacks and Defenses in Neural Network Pruning ☆28 · Updated 2 years ago
- ☆31 · Updated 4 years ago
- ☆24 · Updated 2 years ago
- [CCS 2021] "DataLens: Scalable Privacy Preserving Training via Gradient Compression and Aggregation" by Boxin Wang*, Fan Wu*, Yunhui Long… ☆37 · Updated 3 years ago
- Official repository for ResSFL (accepted at CVPR '22) ☆20 · Updated 2 years ago
- Official implementation of "Provable Defense against Privacy Leakage in Federated Learning from Representation Perspective" ☆55 · Updated last year
- Recycling Model Updates in Federated Learning: Are Gradient Subspaces Low-Rank? ☆14 · Updated 2 years ago
- ☆38 · Updated 3 years ago
- [CVPR 2023] "TrojViT: Trojan Insertion in Vision Transformers" by Mengxin Zheng, Qian Lou, Lei Jiang ☆12 · Updated last year
- ☆54 · Updated last year
- ☆67 · Updated 2 years ago
- Code release for MPCViT (accepted at ICCV 2023) ☆13 · Updated last week
- Federated Dynamic Sparse Training ☆29 · Updated 2 years ago
- Code for the AAAI 2021 paper "Membership Privacy for Machine Learning Models Through Knowledge Transfer" ☆11 · Updated 3 years ago
- A PyTorch implementation of the paper "Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage" ☆58 · Updated 2 years ago
- [ICLR 2023, Best Paper Award at ECCV '22 AROW Workshop] FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning ☆51 · Updated last month
- A summary of existing works on vertical federated/split learning ☆15 · Updated 3 years ago
- ☆15 · Updated 10 months ago
- Code for the AAAI 2021 paper "Defending against Backdoors in Federated Learning with Robust Learning Rate" ☆30 · Updated 2 years ago
- Official code implementation for the CCS 2022 paper "On the Privacy Risks of Cell-Based NAS Architectures" ☆10 · Updated 2 years ago
- ICML 2022 code for "Neurotoxin: Durable Backdoors in Federated Learning" https://arxiv.org/abs/2206.10341 ☆67 · Updated last year
- Defending against Model Stealing via Verifying Embedded External Features ☆34 · Updated 2 years ago
- A Fine-grained Differentially Private Federated Learning against Leakage from Gradients ☆12 · Updated 2 years ago
- Official implementation of "RelaxLoss: Defending Membership Inference Attacks without Losing Utility" (ICLR 2022) ☆48 · Updated 2 years ago
- ☆10 · Updated 2 years ago
- Code for the NeurIPS 2021 paper "Adversarial Neuron Pruning Purifies Backdoored Deep Models" ☆56 · Updated last year