OngWinKent / Federated-Feature-Unlearning
[NeurIPS 2024] Official implementation of the paper “Ferrari: Federated Feature Unlearning via Optimizing Feature Sensitivity”
☆ 9 · Updated this week
Alternatives and similar repositories for Federated-Feature-Unlearning:
Users interested in Federated-Feature-Unlearning are comparing it to the repositories listed below.
- [CVPR 2023] Backdoor Defense via Adaptively Splitting Poisoned Dataset ☆ 46 · Updated 11 months ago
- [ICLR 2024] "Backdoor Federated Learning by Poisoning Backdoor-Critical Layers" ☆ 30 · Updated 3 months ago
- Official implementation of the NeurIPS'24 paper "Defensive Unlearning with Adversarial Training for Robust Concept Erasure in Diffusion Model…" ☆ 39 · Updated 4 months ago
- ☆ 60 · Updated 5 months ago
- [ECCV 2024] Transferable Targeted Adversarial Attack, CLIP models, Generative adversarial network, Multi-target attacks ☆ 31 · Updated 7 months ago
- [ICML 2022 / ICLR 2024] Source code for our papers "Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks" and "Be C…" ☆ 40 · Updated 7 months ago
- [CVPR 2024] Not All Prompts Are Secure: A Switchable Backdoor Attack Against Pre-trained Vision Transformers ☆ 17 · Updated 4 months ago
- [BMVC 2023] Backdoor Attack on Hash-based Image Retrieval via Clean-label Data Poisoning ☆ 15 · Updated last year
- ☆ 29 · Updated 2 years ago
- [ICML 2023] Revisiting Data-Free Knowledge Distillation with Poisoned Teachers ☆ 23 · Updated 8 months ago
- A Fine-grained Differentially Private Federated Learning against Leakage from Gradients ☆ 13 · Updated 2 years ago
- Code for the NeurIPS 2024 paper "Shadowcast: Stealthy Data Poisoning Attacks Against Vision-Language Models" ☆ 42 · Updated last month
- The official implementation of the ECCV'24 paper "To Generate or Not? Safety-Driven Unlearned Diffusion Models Are Still Easy To Generate Uns…" ☆ 71 · Updated 2 weeks ago
- [ICLR 2024] Inducing High Energy-Latency of Large Vision-Language Models with Verbose Images ☆ 30 · Updated last year
- ☆ 26 · Updated 3 months ago
- [NeurIPS 2024 D&B Track] UnlearnCanvas: A Stylized Image Dataset to Benchmark Machine Unlearning for Diffusion Models, by Yihua Zhang, Cho… ☆ 64 · Updated 4 months ago
- ☆ 13 · Updated 8 months ago
- ☆ 58 · Updated 2 years ago
- ☆ 30 · Updated 3 years ago
- A PyTorch implementation of the paper "Auditing Privacy Defenses in Federated Learning via Generative Gradient Leakage" ☆ 57 · Updated 2 years ago
- A collection of papers the author has read (carefully or roughly) in the field of security in diffusion models. Any suggestions … ☆ 24 · Updated 4 months ago
- Official code for "Understanding Deep Gradient Leakage via Inversion Influence Functions", NeurIPS 2023 ☆ 15 · Updated last year
- Implementation of BadCLIP (https://arxiv.org/pdf/2311.16194.pdf) ☆ 19 · Updated 11 months ago
- A repository introducing research topics related to protecting the intellectual property (IP) of AI from a data-centric perspec… ☆ 22 · Updated last year
- Official implementation of "Towards Robust Model Watermark via Reducing Parametric Vulnerability" ☆ 13 · Updated 9 months ago
- ☆ 12 · Updated 2 weeks ago
- [ACM Computing Surveys 2025] Vertical Federated Learning for Effectiveness, Security, Applicability: A Survey, by the MARS Group at Wuhan Univ… ☆ 15 · Updated 9 months ago
- Query-Efficient Data-Free Learning from Black-Box Models ☆ 22 · Updated last year
- [ECCV'24 Oral] The official GitHub page for "Images are Achilles' Heel of Alignment: Exploiting Visual Vulnerabilities for Jailbreaking …" ☆ 24 · Updated 4 months ago
- One Prompt Word is Enough to Boost Adversarial Robustness for Pre-trained Vision-Language Models ☆ 46 · Updated 2 months ago