wsqwsq / Towards-a-Defense-against-Backdoor-Attacks-in-Continual-Federated-Learning
☆12 · Jan 28, 2023 · Updated 3 years ago
Alternatives and similar repositories for Towards-a-Defense-against-Backdoor-Attacks-in-Continual-Federated-Learning
Users interested in Towards-a-Defense-against-Backdoor-Attacks-in-Continual-Federated-Learning are comparing it to the repositories listed below.
- [NeurIPS'22] Trap and Replace: Defending Backdoor Attacks by Trapping Them into an Easy-to-Replace Subnetwork. Haotao Wang, Junyuan Hong,… ☆15 · Nov 27, 2023 · Updated 2 years ago
- FedDefender is a novel defense mechanism designed to safeguard federated learning from poisoning attacks (e.g., backdoor attacks). ☆15 · Jul 6, 2024 · Updated last year
- ☆19 · Dec 7, 2020 · Updated 5 years ago
- Adversarial attacks and defenses against federated learning. ☆20 · May 24, 2023 · Updated 2 years ago
- Not All Poisons are Created Equal: Robust Training against Data Poisoning (ICML 2022) ☆22 · Aug 8, 2022 · Updated 3 years ago
- ☆19 · Mar 6, 2023 · Updated 2 years ago
- TrojanZoo is a universal PyTorch platform for conducting security research (especially on backdoor attacks/defenses) for image classif… ☆21 · Jan 7, 2021 · Updated 5 years ago
- 💉🔐 Novel algorithm for defending against Data Poisoning Attacks in a Federated Learning scenario ☆23 · Apr 22, 2024 · Updated last year
- [ICLR 2023, Best Paper Award at ECCV'22 AROW Workshop] FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning ☆60 · Dec 11, 2024 · Updated last year
- ☆25 · Nov 14, 2022 · Updated 3 years ago
- Code for the ICLR 2023 paper "Better Generative Replay for Continual Federated Learning" ☆33 · Apr 23, 2023 · Updated 2 years ago
- Fighting Gradients with Gradients: Dynamic Defenses against Adversarial Attacks ☆38 · May 25, 2021 · Updated 4 years ago
- Code repository for the ECCV 2022 (Oral) paper "Cartoon Explanations of Image Classifiers" ☆10 · Nov 24, 2025 · Updated 2 months ago
- https://icml.cc/virtual/2023/poster/24354 ☆10 · Aug 15, 2023 · Updated 2 years ago
- Using YOLOv5 to detect whether people fall down or not ☆11 · May 9, 2023 · Updated 2 years ago
- [ICML 2023] Official code implementation of "Chameleon: Adapting to Peer Images for Planting Durable Backdoors in Federated Learning (htt… ☆43 · Sep 9, 2025 · Updated 5 months ago
- Audio-only Emotion Detection using Federated Learning ☆10 · Dec 8, 2022 · Updated 3 years ago
- Code for the paper "Overconfidence is a Dangerous Thing: Mitigating Membership Inference Attacks by Enforcing Less Confident Prediction" … ☆12 · Sep 6, 2023 · Updated 2 years ago
- Minimum viable code for the Decodable Information Bottleneck paper. PyTorch implementation. ☆11 · Oct 20, 2020 · Updated 5 years ago
- Causal Reasoning for Membership Inference Attacks ☆11 · Oct 21, 2022 · Updated 3 years ago
- Official implementation of "Resisting Backdoor Attacks in Federated Learning via Bidirectional Elections and Individual Perspective" ☆13 · Sep 4, 2024 · Updated last year
- Official implementation of the ACL'25 Findings paper "MMUnlearner: Reformulating Multimodal Machine Unlearning in the Era of Multimodal Large Lang… ☆19 · Jun 17, 2025 · Updated 8 months ago
- Integrated Tensorforce and OpenAI Gym to train StarCraft II game agents. ☆13 · Oct 26, 2019 · Updated 6 years ago
- Official repository for "Can Language Models be Instructed to Protect Personal Information?" ☆13 · Oct 8, 2023 · Updated 2 years ago
- This repository contains the artifacts accompanying the paper "Fair Preprocessing" ☆13 · Jul 20, 2021 · Updated 4 years ago
- Self-Teaching Notes on Gradient Leakage Attacks against GPT-2 models. ☆14 · Mar 18, 2024 · Updated last year
- Implementation code for the paper "A Practical Clean-Label Backdoor Attack with Limited Information in Vertical Federated Learning" ☆11 · Jul 1, 2023 · Updated 2 years ago
- Image recommendation service that takes an image as input and returns the most similar images from a database. ☆13 · Sep 19, 2020 · Updated 5 years ago
- ☆15 · Apr 4, 2024 · Updated last year
- ☆11 · Dec 22, 2021 · Updated 4 years ago
- Shadow Attack, LiRA, Quantile Regression and RMIA implementations in PyTorch (Online version) ☆14 · Nov 8, 2024 · Updated last year
- Companion repository to "Prompt Compression and Contrastive Conditioning for Controllability and Toxicity Reduction in Language Models" ☆14 · May 31, 2023 · Updated 2 years ago
- ☆10 · Apr 21, 2022 · Updated 3 years ago
- ☆12 · Sep 26, 2024 · Updated last year
- An unofficial PyTorch implementation of "ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on ML Models" ☆11 · Dec 23, 2023 · Updated 2 years ago
- Utility functions for Weights & Biases (wandb). ☆11 · Sep 17, 2024 · Updated last year
- [CCS'24] Official implementation of "Fisher Information guided Purification against Backdoor Attacks" ☆14 · Oct 29, 2025 · Updated 3 months ago
- Code for "Can We Characterize Tasks Without Labels or Features?" (CVPR 2021) ☆11 · Aug 31, 2021 · Updated 4 years ago
- [USENIX Security 2025] SOFT: Selective Data Obfuscation for Protecting LLM Fine-tuning against Membership Inference Attacks ☆19 · Sep 18, 2025 · Updated 5 months ago