thinh-dao / BackFed
An Efficient & Standardized Benchmark Suite for Backdoor Attacks in Federated Learning
☆41 · Updated last week
Alternatives and similar repositories for BackFed
Users interested in BackFed are comparing it to the libraries listed below.
- Code for the NeurIPS 2024 submission: "DAGER: Extracting Text from Gradients with Language Model Priors" ☆18 · Updated 3 months ago
- ☆361 · Updated 2 weeks ago
- IBA: Towards Irreversible Backdoor Attacks in Federated Learning (Poster at NeurIPS 2023) ☆38 · Updated 2 months ago
- Code related to the paper "Machine Unlearning of Features and Labels" ☆72 · Updated last year
- A curated list of papers & resources linked to data poisoning, backdoor attacks and defenses against them (no longer maintained) ☆280 · Updated 10 months ago
- Awesome Federated Unlearning (FU) Papers (Continually Update) ☆106 · Updated last year
- ✨✨A curated list of latest advances on Large Foundation Models with Federated Learning ☆139 · Updated this week
- A curated list of papers & resources on backdoor attacks and defenses in deep learning. ☆225 · Updated last year
- [arXiv:2411.10023] "Model Inversion Attacks: A Survey of Approaches and Countermeasures" ☆208 · Updated 5 months ago
- ☆55 · Updated 2 years ago
- An Empirical Study of Federated Unlearning: Efficiency and Effectiveness (Accepted Conference Track Papers at ACML 2023) ☆17 · Updated 2 years ago
- Fast integration of backdoor attacks in machine learning and federated learning. ☆57 · Updated last year
- [ICML 2023] Official code implementation of "Chameleon: Adapting to Peer Images for Planting Durable Backdoors in Federated Learning (htt… ☆42 · Updated 2 months ago
- FLPoison: Benchmarking Poisoning Attacks and Defenses in Federated Learning ☆40 · Updated 2 months ago
- ☆560 · Updated 4 months ago
- ☆34 · Updated last year
- Methods for removing learned data from neural nets and evaluation of those methods ☆38 · Updated 5 years ago
- Code implementation of the paper "Federated Unlearning: How to Efficiently Erase a Client in FL?" published at UpML (part of ICML 2022) ☆43 · Updated 2 months ago
- Composite Backdoor Attacks Against Large Language Models ☆20 · Updated last year
- ☆65 · Updated 2 years ago
- A comprehensive toolbox for model inversion attacks and defenses, which is easy to get started with. ☆186 · Updated 2 months ago
- (ACL 2025 - Oral) FedEx-LoRA: Exact Aggregation for Federated and Efficient Fine-Tuning of Foundation Models ☆28 · Updated last month
- This repository contains Python code for the paper "Learn What You Want to Unlearn: Unlearning Inversion Attacks against Machine Unlearni… ☆20 · Updated last year
- A resource repository for machine unlearning in large language models ☆509 · Updated 4 months ago
- ☆29 · Updated 2 years ago
- Implementation of "Federated Full-Parameter Tuning of Billion-Sized Language Models with Communication Cost under 18 Kilobytes" (https://… ☆13 · Updated last year
- ☆112 · Updated last year
- An official implementation of "FedBiOT: LLM Local Fine-tuning in Federated Learning without Full Model", which has been accepted by KDD'2… ☆58 · Updated 8 months ago
- Latest Advances on Federated LLM Learning ☆78 · Updated 4 months ago
- This is a collection of research papers for Federated Learning for Large Language Models (FedLLM). The repository will be continuousl… ☆101 · Updated 4 months ago