vimar-gu / DiM
Distilling Dataset into Generative Models
☆54 · Updated 2 years ago
Alternatives and similar repositories for DiM
Users interested in DiM are comparing it to the repositories listed below
- Efficient Dataset Distillation by Representative Matching ☆113 · Updated last year
- A dataset condensation method, accepted at CVPR 2022. ☆72 · Updated 2 years ago
- ☆63 · Updated last year
- Code for the paper "Minimizing the Accumulated Trajectory Error to Improve Dataset Distillation" (CVPR 2023) ☆40 · Updated 2 years ago
- Official implementation of the paper "Distilling Long-tailed Datasets" [CVPR 2025] ☆18 · Updated 5 months ago
- ☆114 · Updated 2 years ago
- ☆89 · Updated 3 years ago
- Official implementation of the CVPR 2022 paper: "Mimicking the Oracle: An Initial Phase Decorrelation Approach for Class Incremental Learning… ☆35 · Updated 2 years ago
- [CVPR23] "Understanding and Improving Visual Prompting: A Label-Mapping Perspective" by Aochuan Chen, Yuguang Yao, Pin-Yu Chen, Yihua Zha… ☆53 · Updated 2 years ago
- [CVPR 2024] Efficient Dataset Distillation via Minimax Diffusion ☆104 · Updated last year
- PyTorch implementation of the paper "Dataset Distillation via Factorization" (NeurIPS 2022) ☆67 · Updated 3 years ago
- Revisiting Class-Incremental Learning with Pre-Trained Models: Generalizability and Adaptivity are All You Need (IJCV 2024) ☆153 · Updated last year
- Official repository of "Back to Source: Diffusion-Driven Test-Time Adaptation" ☆85 · Updated 2 years ago
- Official PyTorch implementation of "Dataset Condensation via Efficient Synthetic-Data Parameterization" (ICML'22) ☆116 · Updated 2 years ago
- [CVPR 2024] On the Diversity and Realism of Distilled Dataset: An Efficient Dataset Distillation Paradigm ☆80 · Updated 11 months ago
- [ICLR 2023 Spotlight] Divide to Adapt: Mitigating Confirmation Bias for Domain Adaptation of Black-Box Predictors ☆39 · Updated 2 years ago
- (NeurIPS 2023 spotlight) Large-scale Dataset Distillation/Condensation, 50 IPC (Images Per Class) achieves the highest 60.8% on original … ☆136 · Updated last year
- ☆71 · Updated 2 years ago
- ☆56 · Updated 2 years ago
- Official code for Dataset Distillation using Neural Feature Regression (NeurIPS 2022) ☆48 · Updated 3 years ago
- ☆42 · Updated 2 years ago
- ☆30 · Updated last year
- [ICCV 2023] DataDAM: Efficient Dataset Distillation with Attention Matching ☆33 · Updated last year
- PyTorch implementation of our Adam-NSCL algorithm from our CVPR 2021 (oral) paper "Training Networks in Null Space for Continual Learning" ☆69 · Updated 4 years ago
- [ECCV 2022] A generalized long-tailed challenge that incorporates both the conventional class-wise imbalance and the overlooked attribute…