git-disl / recap
Code for CVPR24 Paper - Resource-Efficient Transformer Pruning for Finetuning of Large Models
☆11 · Updated 10 months ago
Alternatives and similar repositories for recap:
Users interested in recap are comparing it to the repositories listed below.
- FuseFL: One-Shot Federated Learning through the Lens of Causality with Progressive Model Fusion (NeurIPS 2024 Spotlight) ☆12 · Updated 3 weeks ago
- A NumPy and PyTorch implementation of CKA-similarity with CUDA support ☆90 · Updated 3 years ago
- ☆14 · Updated 2 years ago
- [NeurIPS'23] FedL2P: Federated Learning to Personalize ☆21 · Updated 9 months ago
- Code for Adaptive Deep Neural Network Inference Optimization with EENet ☆11 · Updated last year
- Prioritize Alignment in Dataset Distillation ☆20 · Updated 4 months ago
- Implementation of the FedPM framework by the authors of the ICLR 2023 paper "Sparse Random Networks for Communication-Efficient Federated… ☆28 · Updated 2 years ago
- ☆86 · Updated 2 years ago
- [ICLR 2023] Test-time Robust Personalization for Federated Learning ☆54 · Updated last year
- Awesome-Low-Rank-Adaptation ☆94 · Updated 6 months ago
- Code for the paper "Minimizing the Accumulated Trajectory Error to Improve Dataset Distillation" (CVPR 2023) ☆40 · Updated 2 years ago
- The official implementation of TinyTrain [ICML '24] ☆22 · Updated 9 months ago
- [ICLR'24] "DeepZero: Scaling up Zeroth-Order Optimization for Deep Model Training" by Aochuan Chen*, Yimeng Zhang*, Jinghan Jia, James Di… ☆57 · Updated 6 months ago
- ☆26 · Updated 2 years ago
- PyTorch implementation of our paper accepted by IEEE TNNLS, 2022: Carrying out CNN Channel Pruning in a White Box ☆18 · Updated 3 years ago
- Official repository for MocoSFL (accepted by ICLR '23, notable 5%) ☆52 · Updated 2 years ago
- [ICLR 2024] Towards Lossless Dataset Distillation via Difficulty-Aligned Trajectory Matching ☆102 · Updated 11 months ago
- [ICCV 2023] DataDAM: Efficient Dataset Distillation with Attention Matching ☆33 · Updated 10 months ago
- Elucidated Dataset Condensation (NeurIPS 2024) ☆21 · Updated 6 months ago
- [AAAI 2022] Up to 100x Faster Data-free Knowledge Distillation ☆69 · Updated 2 years ago
- [NeurIPS 2024] AlphaPruning: Using Heavy-Tailed Self-Regularization Theory for Improved Layer-wise Pruning of Large Language Models ☆22 · Updated 3 weeks ago
- [CVPR 2024] On the Diversity and Realism of Distilled Dataset: An Efficient Dataset Distillation Paradigm ☆69 · Updated 2 months ago
- Benchmark of robust self-supervised learning (RobustSSL) methods & code for AutoLoRa (ICLR 2024) ☆16 · Updated 10 months ago
- Federated Dynamic Sparse Training ☆30 · Updated 2 years ago
- [NeurIPS 2023] ShiftAddViT: Mixture of Multiplication Primitives Towards Efficient Vision Transformer ☆32 · Updated last year
- Qimera: Data-free Quantization with Synthetic Boundary Supporting Samples [NeurIPS 2021] ☆31 · Updated 3 years ago
- Official PyTorch implementation of "Dataset Condensation via Efficient Synthetic-Data Parameterization" (ICML'22) ☆112 · Updated last year
- A generic code base for neural network pruning, especially pruning at initialization ☆30 · Updated 2 years ago
- [ICLR 2023] Towards Understanding and Mitigating Dimensional Collapse in Heterogeneous Federated Learning (https://arxiv.org/abs/2210.0022…) ☆40 · Updated 2 years ago
- Reimplementation of "Visualizing the Loss Landscape of Neural Nets" with PyTorch 1.8 ☆27 · Updated 2 years ago