fjxmlzn / private-evolution-papers
A collection of papers about Private Evolution
☆11 · Updated last month
Alternatives and similar repositories for private-evolution-papers:
Users interested in private-evolution-papers are comparing it to the repositories listed below
- The official implementation of the paper "Does Federated Learning Really Need Backpropagation?" ☆23 · Updated 2 years ago
- Private Adaptive Optimization with Side Information (ICML '22) ☆16 · Updated 2 years ago
- Code release for "Unrolling SGD: Understanding Factors Influencing Machine Unlearning", published at EuroS&P '22 ☆22 · Updated 3 years ago
- R-GAP: Recursive Gradient Attack on Privacy [Accepted at ICLR 2021] ☆35 · Updated 2 years ago
- Official repo for the paper "Recovering Private Text in Federated Learning of Language Models" (NeurIPS 2022) ☆57 · Updated 2 years ago
- Code and checkpoints of compressed networks for the paper titled "HYDRA: Pruning Adversarially Robust Neural Networks" (NeurIPS 2020) (ht… ☆90 · Updated 2 years ago
- [ICLR 2023] Towards Understanding and Mitigating Dimensional Collapse in Heterogeneous Federated Learning (https://arxiv.org/abs/2210.0022… ☆40 · Updated 2 years ago
- [NeurIPS 2023 (Spotlight)] "Model Sparsity Can Simplify Machine Unlearning" by Jinghan Jia*, Jiancheng Liu*, Parikshit Ram, Yuguang Yao, Gao… ☆68 · Updated last year
- LAMP: Extracting Text from Gradients with Language Model Priors (NeurIPS '22) ☆23 · Updated 2 years ago
- [ICLR 2022] Efficient Split-Mix federated learning for in-situ model customization during both training and testing time ☆42 · Updated last year
- [ICLR 2023] Test-time Robust Personalization for Federated Learning ☆53 · Updated last year
- A repository that introduces research topics related to protecting the intellectual property (IP) of AI from a data-centric perspec… ☆22 · Updated last year
- [ICLR 2023] "Combating Exacerbated Heterogeneity for Robust Models in Federated Learning" ☆32 · Updated last year
- RAB: Provable Robustness Against Backdoor Attacks ☆39 · Updated last year
- [ECCV 2024] "Challenging Forgets: Unveiling the Worst-Case Forget Sets in Machine Unlearning" by Chongyu Fan*, Jiancheng Liu*, Alfred Hero, … ☆23 · Updated 5 months ago
- Certified Removal from Machine Learning Models ☆65 · Updated 3 years ago
- Official implementation of "RelaxLoss: Defending Membership Inference Attacks without Losing Utility" (ICLR 2022) ☆48 · Updated 2 years ago
- Diverse Client Selection for Federated Learning via Submodular Maximization ☆29 · Updated 2 years ago
- Backdoor Safety Tuning (NeurIPS 2023 & 2024 Spotlight) ☆25 · Updated 4 months ago