princetonvisualai / RememberThePast-DatasetDistillation
☆38 · Updated 2 years ago
Alternatives and similar repositories for RememberThePast-DatasetDistillation:
Users interested in RememberThePast-DatasetDistillation are comparing it to the libraries listed below.
- This repository is the official implementation of Dataset Condensation with Contrastive Signals (DCC), accepted at ICML 2022. ☆21 · Updated 2 years ago
- Official Code for Dataset Distillation using Neural Feature Regression (NeurIPS 2022) ☆47 · Updated 2 years ago
- ☆86 · Updated 2 years ago
- Code for the paper "Efficient Dataset Distillation using Random Feature Approximation" ☆37 · Updated 2 years ago
- Metrics for "Beyond neural scaling laws: beating power law scaling via data pruning" (NeurIPS 2022 Outstanding Paper Award) ☆55 · Updated last year
- Official PyTorch implementation of "Dataset Condensation via Efficient Synthetic-Data Parameterization" (ICML'22) ☆112 · Updated last year
- ICLR 2022 (Spotlight): Continual Learning With Filter Atom Swapping ☆16 · Updated last year
- Code for NeurIPS 2021 paper "Flattening Sharpness for Dynamic Gradient Projection Memory Benefits Continual Learning" ☆16 · Updated 3 years ago
- ☆16 · Updated 10 months ago
- ☆23 · Updated last year
- ☆42 · Updated last year
- (PyTorch) Training ResNets on ImageNet-100 data ☆56 · Updated 3 years ago
- Sharpness-Aware Minimization Leads to Low-Rank Features [NeurIPS 2023] ☆28 · Updated last year
- Official implementation of "Private Set Generation with Discriminative Information" (NeurIPS 2022) ☆17 · Updated last year
- [ICLR 2022] "Deep Ensembling with No Overhead for either Training or Testing: The All-Round Blessings of Dynamic Sparsity" by Shiwei Liu,… ☆27 · Updated 2 years ago
- ☆38 · Updated 4 months ago
- This repository is the official implementation of Unleashing the Power of Contrastive Self-Supervised Visual Models via Contrast-Regulari… ☆21 · Updated 2 years ago
- ☆54 · Updated 3 months ago
- [NeurIPS 2024] BLoB: Bayesian Low-Rank Adaptation by Backpropagation for Large Language Models ☆25 · Updated 2 months ago
- SparCL: Sparse Continual Learning on the Edge @ NeurIPS 22 ☆29 · Updated last year
- On the Importance of Gradients for Detecting Distributional Shifts in the Wild ☆55 · Updated 2 years ago
- ☆28 · Updated 11 months ago
- ☆113 · Updated last year
- ☆26 · Updated last year
- The code of the paper "Minimizing the Accumulated Trajectory Error to Improve Dataset Distillation" (CVPR 2023) ☆40 · Updated 2 years ago
- This repo implements the CVPR23 paper Trainable Projected Gradient Method for Robust Fine-tuning ☆24 · Updated last year
- [ICCV 2023] DataDAM: Efficient Dataset Distillation with Attention Matching ☆33 · Updated 9 months ago
- ☆48 · Updated 2 years ago
- Weight-Averaged Sharpness-Aware Minimization (NeurIPS 2022) ☆28 · Updated 2 years ago
- ☆12 · Updated last year