NUS-HPC-AI-Lab / InfoGrowth
Efficient and Online Dataset Growth Algorithm (with cleanness and diversity awareness) to deal with growing web data
☆21 · Updated last year
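The repository description above is only a one-liner, so as a rough illustration of what an online, cleanness- and diversity-aware growth loop can look like, here is a minimal Python sketch. Everything in it (`grow_dataset`, `embed`, `clean_score`, the thresholds) is hypothetical and does not reflect InfoGrowth's actual API or algorithm.

```python
# Hypothetical sketch (not InfoGrowth's code): greedily admit streaming
# samples that are both clean and novel relative to what was already kept.
import numpy as np

def grow_dataset(stream, embed, clean_score, tau_clean=0.3, tau_div=0.2):
    """stream: iterable of raw samples (e.g., image-text pairs).
    embed: maps a sample to a unit-norm feature vector (np.ndarray).
    clean_score: higher means more likely correctly labeled/captioned.
    """
    kept, feats = [], []
    for sample in stream:
        if clean_score(sample) < tau_clean:   # cleanness gate
            continue
        z = embed(sample)
        if feats:
            sims = np.stack(feats) @ z        # cosine similarity to kept set
            if sims.max() > 1.0 - tau_div:    # near-duplicate of a kept sample
                continue                      # skip to preserve diversity
        kept.append(sample)
        feats.append(z)
    return kept

# Toy usage: random vectors stand in for real embeddings.
rng = np.random.default_rng(0)
unit = lambda v: v / np.linalg.norm(v)
subset = grow_dataset((rng.normal(size=8) for _ in range(100)),
                      embed=unit, clean_score=lambda s: 1.0)
```

The design choice sketched here is greedy streaming selection: each incoming sample is tested once against a cleanness gate and a nearest-neighbor similarity gate, so memory grows only with the kept subset and no pass over the full stream is repeated.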
Alternatives and similar repositories for InfoGrowth
Users who are interested in InfoGrowth are comparing it to the libraries listed below.
- The code of the paper "Minimizing the Accumulated Trajectory Error to Improve Dataset Distillation" (CVPR 2023) ☆40 · Updated 2 years ago
- [ICCV 2023] DataDAM: Efficient Dataset Distillation with Attention Matching ☆33 · Updated last year
- [ICLR 2024] Towards Lossless Dataset Distillation via Difficulty-Aligned Trajectory Matching ☆105 · Updated last year
- [CVPR 2024] On the Diversity and Realism of Distilled Dataset: An Efficient Dataset Distillation Paradigm ☆81 · Updated 11 months ago
- Code for the ECCV 2022 paper "Learning with Recoverable Forgetting" ☆21 · Updated 3 years ago
- Official implementation of the paper "Distilling Long-tailed Datasets" [CVPR 2025] ☆19 · Updated 5 months ago
- Prioritize Alignment in Dataset Distillation ☆21 · Updated last year
- Official PyTorch implementation of "Multisize Dataset Condensation" (ICLR'24 Oral) ☆15 · Updated last year
- Data distillation benchmark ☆72 · Updated 7 months ago
- Code for our ICML'24 paper on multimodal dataset distillation ☆43 · Updated last year
- [CVPR 2024] Efficient Dataset Distillation via Minimax Diffusion ☆104 · Updated last year
- [CVPR 2024 highlight] Generalized Large-Scale Data Condensation via Various Backbone and Statistical Matching (G-VBSM) ☆28 · Updated last year
- ☆31 · Updated 2 years ago
- A PyTorch implementation of the CVPR 2024 paper "D4M: Dataset Distillation via Disentangled Diffusion Model" ☆38 · Updated last year
- Elucidated Dataset Condensation (NeurIPS 2024) ☆20 · Updated last year
- [NeurIPS 2024] An official PyTorch implementation of the paper "BoostAdapter: Improving Test-Time Adaptation via Regional Bootstrapping" ☆18 · Updated 10 months ago
- ☆63 · Updated last year
- ☆15 · Updated last year
- Official code base for "Long-Tailed Diffusion Models With Oriented Calibration" (ICLR 2024) ☆15 · Updated last year
- With respect to the input tensor instead of the parameters of the NN ☆21 · Updated 3 years ago
- (NeurIPS 2023 spotlight) Large-scale Dataset Distillation/Condensation, 50 IPC (Images Per Class) achieves the highest 60.8% on original … ☆136 · Updated last year
- [AAAI 2024] M3D: Dataset Condensation by Minimizing Maximum Mean Discrepancy ☆25 · Updated last year
- (ICLR 2025 Spotlight) DEEM: Official implementation of "Diffusion Models Serve as the Eyes of Large Language Models for Image Perception" ☆48 · Updated 7 months ago
- Distilling Dataset into Generative Models ☆54 · Updated 2 years ago
- Code for the CVPR 2024 Oral "Neural Lineage" ☆17 · Updated last year
- Official code for "Dataset Distillation using Neural Feature Regression" (NeurIPS 2022) ☆48 · Updated 3 years ago
- A dataset condensation method, accepted at CVPR 2022 ☆72 · Updated 2 years ago
- Official PyTorch code for "Is Synthetic Data From Diffusion Models Ready for Knowledge Distillation?" (https://arxiv.org/abs/2305.12954) ☆49 · Updated 2 years ago
- [ECCV 2024, NeurIPS 2024] Benchmarking Generalized Out-of-Distribution Detection with Vision-Language Models ☆29 · Updated 2 weeks ago
- ☆23 · Updated last year