OPTML-Group / DP4TL
[NeurIPS 2023] "Selectivity Drives Productivity: Efficient Dataset Pruning for Enhanced Transfer Learning" by Yihua Zhang*, Yimeng Zhang*, Aochuan Chen*, Jinghan Jia, Jiancheng Liu, Gaowen Liu, Mingyi Hong, Shiyu Chang, Sijia Liu
☆14 · Updated 2 years ago
Alternatives and similar repositories for DP4TL
Users interested in DP4TL are comparing it to the libraries listed below.
- ☆15 · Updated last year
- Official PyTorch implementation of "Multisize Dataset Condensation" (ICLR'24 Oral) ☆15 · Updated last year
- (ICML 2023) Discover and Cure: Concept-aware Mitigation of Spurious Correlation ☆43 · Updated 2 months ago
- AAAI 2024, M3D: Dataset Condensation by Minimizing Maximum Mean Discrepancy ☆25 · Updated last year
- Official PyTorch implementation of "Loss-Curvature Matching for Dataset Selection and Condensation" (AISTATS 2023) ☆22 · Updated 2 years ago
- ☆14 · Updated 2 years ago
- ☆30 · Updated last year
- This repository is the official implementation of Dataset Condensation with Contrastive Signals (DCC), accepted at ICML 2022. ☆22 · Updated 3 years ago
- With respect to the input tensor instead of parameters of NN ☆21 · Updated 3 years ago
- ☆91 · Updated 3 years ago
- [ICCV 2023] DataDAM: Efficient Dataset Distillation with Attention Matching ☆33 · Updated last year
- The code of the paper "Minimizing the Accumulated Trajectory Error to Improve Dataset Distillation" (CVPR 2023) ☆40 · Updated 2 years ago
- [CVPR 2024] On the Diversity and Realism of Distilled Dataset: An Efficient Dataset Distillation Paradigm ☆81 · Updated 11 months ago
- Official PyTorch implementation for Frequency Domain-based Dataset Distillation [NeurIPS 2023] ☆30 · Updated last year
- ☆28 · Updated 2 years ago
- ☆26 · Updated 2 years ago
- ICLR 2022 (Spotlight): Continual Learning With Filter Atom Swapping ☆16 · Updated 2 years ago
- ☆41 · Updated 3 years ago
- [NeurIPS 2022] The official code for our NeurIPS 2022 paper "Inducing Neural Collapse in Imbalanced Learning: Do We Really Need a Learnab… ☆49 · Updated 3 years ago
- [ICLR 2025] "Rethinking LLM Unlearning Objectives: A Gradient Perspective and Go Beyond" ☆17 · Updated 11 months ago
- Code for the paper "Parameter Efficient Multi-task Model Fusion with Partial Linearization" ☆25 · Updated last year
- [ICLR 2024] "Data Distillation Can Be Like Vodka: Distilling More Times For Better Quality" by Xuxi Chen*, Yu Yang*, Zhangyang Wang, Baha… ☆15 · Updated last year
- Code for the paper "Efficient Dataset Distillation using Random Feature Approximation" ☆37 · Updated 2 years ago
- Official code repo for the paper "Merging Models on the Fly Without Retraining: A Sequential Approach to Scalable Continual Model Merging" ☆22 · Updated 3 months ago
- Representation Surgery for Multi-Task Model Merging. ICML 2024. ☆47 · Updated last year
- [NeurIPS 2023 (Spotlight)] "Model Sparsity Can Simplify Machine Unlearning" by Jinghan Jia*, Jiancheng Liu*, Parikshit Ram, Yuguang Yao, Gao… ☆83 · Updated last year
- Official code for the ICLR 2022 paper "Chaos is a Ladder: A New Theoretical Understanding of Contrastive Learning via Augmentation Overlap" ☆28 · Updated 4 months ago
- [NeurIPS 2023] Code release for "Going Beyond Linear Mode Connectivity: The Layerwise Linear Feature Connectivity" ☆19 · Updated 2 years ago
- [SatML 2024] Shake to Leak: Fine-tuning Diffusion Models Can Amplify the Generative Privacy Risk ☆16 · Updated 10 months ago
- [ICML 2023] "Unleashing Mask: Explore the Intrinsic Out-of-Distribution Detection Capability" ☆18 · Updated 2 years ago