☆38 · Updated Nov 24, 2021
Alternatives and similar repositories for Fast-Machine-Unlearning
Users interested in Fast-Machine-Unlearning are comparing it to the repositories listed below.
- Official repository of the paper "Zero-Shot Machine Unlearning", accepted in IEEE Transactions on Information Forensics and Security (☆51, updated May 19, 2023)
- Code for the CVPR 2022 paper "Deep Unlearning via Randomized Conditionally Independent Hessians" (☆25, updated Jul 9, 2022)
- Methods for removing learned data from neural networks, and evaluation of those methods (☆38, updated Nov 26, 2020)
- Code for the paper "Overconfidence is a Dangerous Thing: Mitigating Membership Inference Attacks by Enforcing Less Confident Prediction" … (☆12, updated Sep 6, 2023)
- ☆199 (updated Sep 22, 2023)
- Camouflage poisoning via machine unlearning (☆19, updated Jul 3, 2025)
- Awesome Machine Unlearning (A Survey of Machine Unlearning) (☆936, updated Feb 28, 2026)
- ☆22 (updated Dec 17, 2025)
- ☆53 (updated Aug 17, 2024)
- [AAAI, ICLR TP] Fast Machine Unlearning Without Retraining Through Selective Synaptic Dampening (☆56, updated Sep 11, 2024)
- GTAR l2l generator unlearning project (☆17, updated May 6, 2024)
- ☆60 (updated Jun 17, 2020)
- ☆17 (updated Feb 17, 2024)
- Code release for "Unrolling SGD: Understanding Factors Influencing Machine Unlearning", published at EuroS&P '22 (☆25, updated Mar 13, 2022)
- [NeurIPS 2023 (Spotlight)] "Model Sparsity Can Simplify Machine Unlearning" by Jinghan Jia*, Jiancheng Liu*, Parikshit Ram, Yuguang Yao, Gao… (☆84, updated Feb 28, 2026)
- Official repository for the ICML 2023 paper "Can Neural Network Memorization Be Localized?" (☆21, updated Oct 26, 2023)
- Certified Removal from Machine Learning Models (☆69, updated Aug 23, 2021)
- Source code for the NAACL 2025 Findings paper "Scaling Up Membership Inference: When and How Attacks Succeed on Large Language Models" (☆15, updated Dec 16, 2025)
- Backdoor Safety Tuning (NeurIPS 2023 & 2024 Spotlight) (☆27, updated Nov 18, 2024)
- ☆14 (updated May 8, 2024)
- Official code for the ICCV 2023 paper "One-bit Flip is All You Need: When Bit-flip Attack Meets Model Training" (☆20, updated Aug 9, 2023)
- Code for the paper "Membership Inference Attacks Against Vision-Language Models" (☆27, updated Jan 25, 2025)
- Existing literature on machine unlearning (☆954, updated Aug 29, 2025)
- Code for the ECCV 2022 paper "Learning with Recoverable Forgetting" (☆21, updated Jul 27, 2022)
- ☆15 (updated Apr 7, 2023)
- ☆22 (updated Dec 22, 2024)
- Awesome Federated Unlearning (FU) papers (continually updated) (☆112, updated Apr 24, 2024)
- ☆31 (updated Oct 7, 2021)
- Official implementation of the NeurIPS 2021 paper "Finding Optimal Tangent Points for Reducing Distortions of Hard-label Attacks" (☆17, updated Jul 4, 2023)
- ☆18 (updated Jul 20, 2022)
- Starting kit for the NeurIPS 2023 unlearning challenge (☆376, updated Sep 2, 2023)
- ☆10 (updated Jun 13, 2021)
- [ICLR 2021] Unlearnable Examples: Making Personal Data Unexploitable (☆171, updated Jul 5, 2024)
- Implementation demo of the ICLR 2021 paper "Neural Attention Distillation: Erasing Backdoor Triggers from Deep Neural Networks"… (☆128, updated Jan 18, 2022)
- ☆14 (updated Feb 24, 2020)
- Knowledge distillation (KD) from a decision-based black-box (DB3) teacher without training data (☆22, updated May 3, 2022)
- [ICML 2023] Are Diffusion Models Vulnerable to Membership Inference Attacks? (☆43, updated Sep 4, 2024)
- ☆14 (updated Aug 17, 2024)
- ☆20 (updated Jun 21, 2019)