☆26 · Updated Dec 14, 2021
Alternatives and similar repositories for robbing_the_fed
Users interested in robbing_the_fed are comparing it to the repositories listed below.
- A centralized place for deep thinking code and experiments (☆90, updated Aug 9, 2023)
- Official implementation of the GOAT model (ICML 2023) (☆38, updated Jul 3, 2023)
- PyTorch Datasets for Easy-To-Hard (☆29, updated Jan 9, 2025)
- Code for "The Intrinsic Dimension of Images and Its Impact on Learning" (ICLR 2021 Spotlight, https://openreview.net/forum?id=XJk19XzGq2J) (☆72, updated Apr 16, 2024)
- Eluding Secure Aggregation in Federated Learning via Model Inconsistency (☆13, updated Mar 10, 2023)
- Algorithms to recover input data from their gradient signal through a neural network (☆314, updated Apr 14, 2023)
- Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching (☆112, updated Aug 19, 2024)
- A simple and efficient baseline for data attribution (☆11, updated Nov 10, 2023)
- Implementations of data poisoning attacks against neural networks and related defenses (☆104, updated Jul 16, 2024)
- Code repo for the UAI 2023 paper "Learning To Invert: Simple Adaptive Attacks for Gradient Inversion in Federated Learning" (☆16, updated Jun 15, 2024)
- Surrogate Model Extension (SME): A Fast and Accurate Weight Update Attack on Federated Learning (accepted at ICML 2023) (☆14, updated Mar 31, 2024)
- Official implementation of "Provable Defense against Privacy Leakage in Federated Learning from Representation Perspective" (☆57, updated May 4, 2023)
- A unified benchmark problem for data poisoning attacks (☆161, updated Oct 4, 2023)
- Training vision models with full-batch gradient descent and regularization (☆39, updated Feb 14, 2023)
- Official code for "Understanding Deep Gradient Leakage via Inversion Influence Functions" (NeurIPS 2023) (☆15, updated Oct 13, 2023)
- An empirical investigation of deep learning theory (☆16, updated Oct 3, 2019)
- Official repo for the paper "Recovering Private Text in Federated Learning of Language Models" (NeurIPS 2022) (☆61, updated Mar 13, 2023)
- Official implementation of "EDEN: Communication-Efficient and Robust Distributed Mean Estimation for Federated Lea…" (☆14, updated Aug 2, 2022)
- Code for the paper "Understanding Generalization through Visualizations" (☆65, updated Jan 15, 2021)
- Official implementation of our FLAG paper (CVPR 2022) (☆144, updated Apr 2, 2022)
- Implementation of experiments from "The No Free Lunch Theorem, Kolmogorov Complexity, and the Role of Inductive Biases in Machine Learning" (☆17, updated May 14, 2023)
- [SatML 2024] Shake to Leak: Fine-tuning Diffusion Models Can Amplify the Generative Privacy Risk (☆16, updated Mar 15, 2025)
- Federated Learning with New Knowledge: explores how to incorporate various new knowledge into existing FL systems and evolve these systems t… (☆86, updated Feb 7, 2024)
- Official code for the publication "The Close Relationship Between Contrastive Learning and Meta-Learning" (☆18, updated Sep 19, 2022)
- Official PyTorch implementation of "Can Neural Nets Learn the Same Model Twice? Investigating Reproducibility and Double Descent from t…" (☆83, updated May 5, 2022)
- Privacy attacks on Split Learning (☆43, updated Nov 15, 2021)
- [NeurIPS 2022] "Randomized Channel Shuffling: Minimal-Overhead Backdoor Attack Detection without Clean Datasets" by Ruisi Cai*, Zhenyu Zh… (☆21, updated Oct 1, 2022)
- Code for our paper "Robust Federated Learning with Attack-Adaptive Aggregation", accepted by FTL-IJCAI'21 (☆46, updated Jun 12, 2023)
- Data for our paper "Defending ChatGPT against Jailbreak Attack via Self-Reminder" (☆20, updated Oct 26, 2023)