HSG-AIML / SANE
Code Repository for the ICML 2024 paper: "Towards Scalable and Versatile Weight Space Learning".
☆ 29 · Updated last year
Alternatives and similar repositories for SANE
Users interested in SANE are comparing it with the repositories listed below.
- Official implementation for 'Class-Balancing Diffusion Models' · ☆56 · Updated last year
- [CVPR 2024] Efficient Dataset Distillation via Minimax Diffusion · ☆104 · Updated last year
- A PyTorch implementation of Centered Kernel Alignment (CKA) with GPU acceleration · ☆57 · Updated 2 years ago
- [ICLR 2024 Spotlight] Neuron Activation Coverage: Rethinking Out-of-distribution Detection and Generalization · ☆34 · Updated last year
- [CVPR 2024 Highlight] Generalized Large-Scale Data Condensation via Various Backbone and Statistical Matching (G-VBSM) · ☆28 · Updated last year
- [CVPR 2024] On the Diversity and Realism of Distilled Dataset: An Efficient Dataset Distillation Paradigm · ☆81 · Updated 11 months ago
- Official code for "Dataset Distillation using Neural Feature Regression" (NeurIPS 2022) · ☆48 · Updated 3 years ago
- Efficient Dataset Distillation by Representative Matching · ☆113 · Updated last year
- A PyTorch implementation of the CVPR 2024 paper "D4M: Dataset Distillation via Disentangled Diffusion Model" · ☆38 · Updated last year
- Official code for the CVPR 2022 paper "Dataset Distillation by Matching Training Trajectories" · ☆436 · Updated last year
- Give us minutes, we give back a faster Mamba: the official implementation of "Faster Vision Mamba is Rebuilt in Minutes via Merged Token …" · ☆40 · Updated last year
- [ICLR 2024] Towards Lossless Dataset Distillation via Difficulty-Aligned Trajectory Matching · ☆105 · Updated last year
- Official PyTorch implementation of "Dataset Condensation via Efficient Synthetic-Data Parameterization" (ICML 2022) · ☆116 · Updated 2 years ago
- Awesome papers on weight-space learning · ☆35 · Updated 3 weeks ago
- Source code for the NeurIPS 2023 paper "Dream the Impossible: Outlier Imagination with Diffusion Models" · ☆72 · Updated 9 months ago
- Distilling Dataset into Generative Models · ☆54 · Updated 2 years ago
- (NeurIPS 2023 Spotlight) Large-scale Dataset Distillation/Condensation; 50 IPC (images per class) achieves the highest 60.8% on original … · ☆136 · Updated last year
- Official implementation for "Equivariant Architectures for Learning in Deep Weight Spaces" (ICML 2023) · ☆90 · Updated 2 years ago
- Data distillation benchmark · ☆72 · Updated 7 months ago
- Prioritize Alignment in Dataset Distillation · ☆21 · Updated last year
- SparCL: Sparse Continual Learning on the Edge (NeurIPS 2022) · ☆30 · Updated 2 years ago
- Code release for "Generative Modeling of Weights: Generalization or Memorization?"☆19Updated 8 months ago
- Diffusion Classifier leverages pretrained diffusion models to perform zero-shot classification without additional training☆486Updated last year
- Code release for ICLR 2024 paper: Exploring Diffusion Time-steps for Unsupervised Representation Learning☆34Updated last year
- Neural-etwork-parameters-with-Diffusion☆37Updated last year
- Dataset Quantization with Active Learning based Adaptive Sampling (ECCV 2024) · ☆10 · Updated last year