insait-institute / dager-gradient-inversion
Code for the NeurIPS 2024 submission: "DAGER: Extracting Text from Gradients with Language Model Priors"
☆18 · Updated 3 months ago
Alternatives and similar repositories for dager-gradient-inversion
Users interested in dager-gradient-inversion are comparing it to the repositories listed below.
- An official implementation of "FedBiOT: LLM Local Fine-tuning in Federated Learning without Full Model", which has been accepted by KDD'2… ☆58 · Updated 8 months ago
- ☆112 · Updated last year
- FLPoison: Benchmarking Poisoning Attacks and Defenses in Federated Learning ☆40 · Updated 2 months ago
- Latest Advances on Federated LLM Learning ☆78 · Updated 4 months ago
- ☆361 · Updated 2 weeks ago
- (ACL 2025 - Oral) FedEx-LoRA: Exact Aggregation for Federated and Efficient Fine-Tuning of Foundation Models ☆28 · Updated last month
- ✨✨ A curated list of latest advances on Large Foundation Models with Federated Learning ☆139 · Updated this week
- The code for the paper "The Good and The Bad: Exploring Privacy Issues in Retrieval-Augmented Generation (RAG)", exploring the privacy risk o… ☆58 · Updated 9 months ago
- This is a collection of research papers for Federated Learning for Large Language Models (FedLLM). And the repository will be continuousl… ☆101 · Updated 4 months ago
- ☆65 · Updated 2 years ago
- [TDSC 2024] Official code for our paper "FedTracker: Furnishing Ownership Verification and Traceability for Federated Learning Model" ☆21 · Updated 6 months ago
- Awesome Federated Unlearning (FU) Papers (Continually Update) ☆106 · Updated last year
- Composite Backdoor Attacks Against Large Language Models ☆20 · Updated last year
- A curated list of papers & resources linked to data poisoning, backdoor attacks and defenses against them (no longer maintained) ☆280 · Updated 10 months ago
- ☆34 · Updated last year
- ☆99 · Updated 10 months ago
- Source code and scripts for the paper "Is Difficulty Calibration All We Need? Towards More Practical Membership Inference Attacks" ☆20 · Updated 11 months ago
- ☆55 · Updated 2 years ago
- Self-Teaching Notes on Gradient Leakage Attacks against GPT-2 models. ☆14 · Updated last year
- [ICML 2023] Official code implementation of "Chameleon: Adapting to Peer Images for Planting Durable Backdoors in Federated Learning (htt… ☆42 · Updated 2 months ago
- Code related to the paper "Machine Unlearning of Features and Labels" ☆72 · Updated last year
- [ICLR 2024] "Backdoor Federated Learning by Poisoning Backdoor-Critical Layers" ☆48 · Updated 11 months ago
- Federated Learning in CVPR 2024 ☆19 · Updated last year
- ☆26 · Updated last year
- IBA: Towards Irreversible Backdoor Attacks in Federated Learning (Poster at NeurIPS 2023) ☆38 · Updated 2 months ago
- A survey on harmful fine-tuning attack for large language model ☆221 · Updated last week
- [arXiv:2411.10023] "Model Inversion Attacks: A Survey of Approaches and Countermeasures" ☆208 · Updated 5 months ago
- ☆29 · Updated 2 years ago
- [NeurIPS 2025] BackdoorLLM: A Comprehensive Benchmark for Backdoor Attacks and Defenses on Large Language Models