cywinski / guide
☆13 · Updated last year
Alternatives and similar repositories for guide
Users interested in guide are comparing it to the libraries listed below.
- [CVPR2024] Efficient Dataset Distillation via Minimax Diffusion · ☆104 · Updated last year
- Hierarchical Decomposition of Prompt-Based Continual Learning: Rethinking Obscured Sub-optimality (NeurIPS 2023, Spotlight) · ☆90 · Updated last year
- [CVPR2024 highlight] Generalized Large-Scale Data Condensation via Various Backbone and Statistical Matching (G-VBSM) · ☆28 · Updated last year
- Everything to the Synthetic: Diffusion-driven Test-time Adaptation via Synthetic-Domain Alignment (arXiv 2024 / CVPR 2025) · ☆40 · Updated 11 months ago
- Code for the ICML 2024 paper (Oral) "Test-Time Model Adaptation with Only Forward Passes" · ☆95 · Updated last year
- Official implementation of "Mixture of Experts Meets Prompt-Based Continual Learning" (NeurIPS 2024) · ☆41 · Updated 6 months ago
- Code for the ICML 2023 paper "DDGR: Continual Learning with Deep Diffusion-based Generative Replay" · ☆39 · Updated 2 years ago
- Source code for the NeurIPS 2023 paper "Dream the Impossible: Outlier Imagination with Diffusion Models" · ☆72 · Updated 9 months ago
- [ECCV 2024] MagMax: Leveraging Model Merging for Seamless Continual Learning (official repository) · ☆29 · Updated last year
- Consistent Prompting for Rehearsal-Free Continual Learning [CVPR2024] · ☆36 · Updated 7 months ago
- Efficient Dataset Distillation by Representative Matching · ☆113 · Updated last year
- Practical Continual Forgetting for Pre-trained Vision Models (CVPR 2024; T-PAMI 2026) · ☆70 · Updated 3 weeks ago
- Code for our ICML 2024 paper on multimodal dataset distillation · ☆43 · Updated last year
- [CVPR2025] Implementation of the paper "OODD: Test-time Out-of-Distribution Detection with Dynamic Dictionary" · ☆18 · Updated 9 months ago
- [ICLR 2024] Real-Fake: Effective Training Data Synthesis Through Distribution Matching · ☆78 · Updated 2 years ago
- Collection of awesome Continual Test-Time Adaptation methods · ☆24 · Updated last year
- [CVPR 2024] On the Diversity and Realism of Distilled Dataset: An Efficient Dataset Distillation Paradigm · ☆81 · Updated 11 months ago
- Revisiting Class-Incremental Learning with Pre-Trained Models: Generalizability and Adaptivity are All You Need (IJCV 2024) · ☆153 · Updated last year
- ☆17 · Updated last year
- A PyTorch implementation of the CVPR 2024 paper "D4M: Dataset Distillation via Disentangled Diffusion Model" · ☆38 · Updated last year
- A Comprehensive Survey on Continual Learning in Generative Models · ☆116 · Updated last week
- Learning without Forgetting for Vision-Language Models (TPAMI 2025) · ☆58 · Updated 7 months ago
- A paper list for our recent survey on continual learning, plus other useful resources in this field · ☆103 · Updated last year
- Awesome Low-Rank Adaptation · ☆59 · Updated 6 months ago
- Official code for the ICLR 2024 paper "A Hard-to-Beat Baseline for Training-free CLIP-based Adaptation" · ☆85 · Updated last year
- ☆63 · Updated last year
- Diffusion-TTA improves pre-trained discriminative models, such as image classifiers or segmentors, using pre-trained generative models · ☆80 · Updated last year
- Data distillation benchmark · ☆72 · Updated 7 months ago
- [NeurIPS 2024 Spotlight] EMR-Merging: Tuning-Free High-Performance Model Merging · ☆76 · Updated 11 months ago
- ☆16 · Updated last year