HSG-AIML / SANE
Code Repository for the ICML 2024 paper: "Towards Scalable and Versatile Weight Space Learning".
☆20 · Updated 7 months ago
Alternatives and similar repositories for SANE:
Users interested in SANE are comparing it to the repositories listed below.
- ☆113 · Updated last year
- Official PyTorch implementation of "Dataset Condensation via Efficient Synthetic-Data Parameterization" (ICML'22) ☆112 · Updated last year
- ☆35 · Updated 2 years ago
- Official Code for Dataset Distillation using Neural Feature Regression (NeurIPS 2022) ☆47 · Updated 2 years ago
- Official implementation for 'Class-Balancing Diffusion Models' ☆53 · Updated 11 months ago
- [ICLR 2024] Towards Lossless Dataset Distillation via Difficulty-Aligned Trajectory Matching ☆102 · Updated 11 months ago
- Code Repository for the NeurIPS 2021 paper: "Self-Supervised Representation Learning on Neural Network Weights for Model Characteristic Prediction" ☆19 · Updated 9 months ago
- Official implementation for Equivariant Architectures for Learning in Deep Weight Spaces [ICML 2023] ☆89 · Updated last year
- [ICLR 2024 Spotlight] Neuron Activation Coverage: Rethinking Out-of-distribution Detection and Generalization ☆30 · Updated 6 months ago
- ☆39 · Updated 2 years ago
- A PyTorch implementation of Centered Kernel Alignment (CKA) with GPU acceleration. ☆44 · Updated last year
- Source code for the NeurIPS'23 paper "Dream the Impossible: Outlier Imagination with Diffusion Models" ☆68 · Updated last week
- ☆16 · Updated 10 months ago
- ☆45 · Updated 2 years ago
- [CVPR 2024] Efficient Dataset Distillation via Minimax Diffusion ☆90 · Updated last year
- SparCL: Sparse Continual Learning on the Edge @ NeurIPS 22 ☆29 · Updated last year
- The code of the paper "Minimizing the Accumulated Trajectory Error to Improve Dataset Distillation" (CVPR 2023) ☆40 · Updated 2 years ago
- Metrics for "Beyond neural scaling laws: beating power law scaling via data pruning" (NeurIPS 2022 Outstanding Paper Award) ☆56 · Updated 2 years ago
- [ICCV 2023] DataDAM: Efficient Dataset Distillation with Attention Matching ☆33 · Updated 10 months ago
- ☆56 · Updated 3 months ago
- Prioritize Alignment in Dataset Distillation ☆20 · Updated 4 months ago
- A dataset condensation method, accepted at CVPR 2022. ☆69 · Updated last year
- With respect to the input tensor instead of the parameters of the NN ☆18 · Updated 2 years ago
- Model Zoos published at the NeurIPS 2022 Datasets & Benchmarks track: "Model Zoos: A Dataset of Diverse Populations of Neural Network Models" ☆55 · Updated last year
- ☆86 · Updated 2 years ago
- Code for the paper "Efficient Dataset Distillation using Random Feature Approximation" ☆37 · Updated 2 years ago
- Efficient Dataset Distillation by Representative Matching ☆113 · Updated last year
- Official repository of "Back to the Source: Diffusion-Driven Test-Time Adaptation" ☆77 · Updated last year
- Code Repository for the NeurIPS 2022 paper: "Hyper-Representations as Generative Models: Sampling Unseen Neural Network Weights". ☆16 · Updated 9 months ago
- [AAAI, ICLR TP] Fast Machine Unlearning Without Retraining Through Selective Synaptic Dampening ☆46 · Updated 7 months ago