MadryLab / D3M
Debiasing Through Data Attribution
☆12 · Updated last year
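D3M's one-line description names a concrete recipe: use data attribution to find the training examples most responsible for a model's biased behavior, then retrain without them. Below is a minimal, hypothetical sketch of that general recipe, not the D3M implementation; the attribution scores and helper names are assumptions for illustration.

```python
# Minimal, hypothetical sketch of debiasing via data attribution (not D3M's code).
# Assumption: attributions[i] estimates how much training example i increases a
# bias metric (e.g., loss on a disadvantaged group); higher means more harmful.
import numpy as np

def examples_to_drop(attributions: np.ndarray, k: int) -> np.ndarray:
    """Return indices of the k training examples with the most harmful scores."""
    return np.argsort(attributions)[-k:]

# Toy usage with random numbers standing in for real attribution estimates
# (in practice these would come from a data attribution or valuation method).
rng = np.random.default_rng(0)
scores = rng.normal(size=1_000)            # placeholder attribution scores
keep = np.ones(scores.shape[0], dtype=bool)
keep[examples_to_drop(scores, k=50)] = False
# model.fit(X_train[keep], y_train[keep])  # retrain on the curated subset
```

Several of the repositories listed below (e.g., LAVA, FreeShap, and the diffusion-model attribution work) estimate exactly this kind of per-example score, which is presumably why they surface as related projects.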
Alternatives and similar repositories for D3M
Users that are interested in D3M are comparing it to the libraries listed below
- What do we learn from inverting CLIP models? ☆58 · Updated last year
- Repository for research works and resources related to model reprogramming <https://arxiv.org/abs/2202.10629> ☆64 · Updated 4 months ago
- Code for the paper "Evading Black-box Classifiers Without Breaking Eggs" [SaTML 2024] ☆21 · Updated last year
- Host CIFAR-10.2 Data Set ☆13 · Updated 4 years ago
- ☆24 · Updated last year
- This is an official repository for "LAVA: Data Valuation without Pre-Specified Learning Algorithms" (ICLR 2023). ☆52 · Updated last year
- [NeurIPS 2023 (Spotlight)] "Model Sparsity Can Simplify Machine Unlearning" by Jinghan Jia*, Jiancheng Liu*, Parikshit Ram, Yuguang Yao, Gao… ☆82 · Updated last year
- Official code for the ICCV 2023 paper "One-bit Flip is All You Need: When Bit-flip Attack Meets Model Training" ☆20 · Updated 2 years ago
- [SaTML 2024] Shake to Leak: Fine-tuning Diffusion Models Can Amplify the Generative Privacy Risk ☆16 · Updated 10 months ago
- ☆23 · Updated last year
- ☆24 · Updated last year
- Intriguing Properties of Data Attribution on Diffusion Models (ICLR 2024) ☆37 · Updated 2 years ago
- ☆18 · Updated last year
- Official Repository for Dataset Inference for LLMs ☆43 · Updated last year
- [NeurIPS 2023] Differentially Private Image Classification by Learning Priors from Random Processes ☆12 · Updated 2 years ago
- NeurIPS'24 - LLM Safety Landscape ☆39 · Updated 3 months ago
- Representation Surgery for Multi-Task Model Merging (ICML 2024) ☆47 · Updated last year
- GitHub repo for the NeurIPS 2024 paper "Safe LoRA: the Silver Lining of Reducing Safety Risks when Fine-tuning Large Language Models" ☆25 · Updated last month
- A modern look at the relationship between sharpness and generalization [ICML 2023] ☆43 · Updated 2 years ago
- [ICML 2023] "Robust Weight Signatures: Gaining Robustness as Easy as Patching Weights?" by Ruisi Cai, Zhenyu Zhang, Zhangyang Wang ☆16 · Updated 2 years ago
- Privacy backdoors ☆50 · Updated last year
- Official PyTorch repo of CVPR'23 and NeurIPS'23 papers on understanding replication in diffusion models. ☆113 · Updated 2 years ago
- Implementation of the paper "Improving the Accuracy-Robustness Trade-off of Classifiers via Adaptive Smoothing". ☆10 · Updated 2 years ago
- ☆16 · Updated 10 months ago
- Code for "Adversarial robustness against multiple and single $l_p$-threat models via quick fine-tuning of robust classifiers" ☆19 · Updated 3 years ago
- Fine-tuning-free Shapley value (FreeShap) for instance attribution ☆14 · Updated last year
- [NeurIPS 2023] "Selectivity Drives Productivity: Efficient Dataset Pruning for Enhanced Transfer Learning" by Yihua Zhang*, Yimeng Zhang*,… ☆14 · Updated 2 years ago
- Official implementation of the paper "Does Federated Learning Really Need Backpropagation?" ☆23 · Updated 2 years ago
- ☆20 · Updated 2 years ago
- Official code for "Evaluations of Machine Learning Privacy Defenses are Misleading" (https://arxiv.org/abs/2404.17399) ☆11 · Updated last year