git-disl / recap
Code for CVPR24 Paper - Resource-Efficient Transformer Pruning for Finetuning of Large Models
☆12 · Updated last month
Alternatives and similar repositories for recap
Users interested in recap are comparing it to the libraries listed below.
- A NumPy and PyTorch implementation of CKA-similarity with CUDA support ☆94 · Updated 4 years ago
- [ICCV23] Robust Mixture-of-Expert Training for Convolutional Neural Networks by Yihua Zhang, Ruisi Cai, Tianlong Chen, Guanhua Zhang, Hua… ☆66 · Updated 2 years ago
- Awesome-Low-Rank-Adaptation ☆124 · Updated last year
- Awesome Pruning. ✅ Curated Resources for Neural Network Pruning. ☆173 · Updated last year
- [AAAI-2022] Up to 100x Faster Data-free Knowledge Distillation ☆75 · Updated 3 years ago
- In progress. ☆67 · Updated last year
- Benchmark of robust self-supervised learning (RobustSSL) methods & Code for AutoLoRa (ICLR 2024) ☆19 · Updated last week
- [ICLR'24] "DeepZero: Scaling up Zeroth-Order Optimization for Deep Model Training" by Aochuan Chen*, Yimeng Zhang*, Jinghan Jia, James Di… ☆67 · Updated last year
- The official implementation of NNSplitter (ICML'23) ☆12 · Updated last year
- Reimplementation of Visualizing the Loss Landscape of Neural Nets with PyTorch 1.8 ☆30 · Updated 3 years ago
- PyTorch code for the paper "Resource-Adaptive Federated Learning with All-In-One Neural Composition" (NeurIPS 2022) ☆19 · Updated 3 years ago
- Prioritize Alignment in Dataset Distillation ☆20 · Updated last year
- [CVPR '24] Official implementation of the paper "Multiflow: Shifting Towards Task-Agnostic Vision-Language Pruning" ☆23 · Updated 9 months ago
- [ICLR 2023] Test-time Robust Personalization for Federated Learning ☆54 · Updated 2 years ago
- A generic code base for neural network pruning, especially for pruning at initialization ☆31 · Updated 3 years ago
- The official implementation of TinyTrain [ICML '24] ☆23 · Updated last year
- [TPAMI 2023] Low Dimensional Landscape Hypothesis is True: DNNs can be Trained in Tiny Subspaces ☆42 · Updated 3 years ago
- Code for Data-Free Knowledge Distillation via Feature Exchange and Activation Region Constraint ☆21 · Updated 2 years ago
- [NeurIPS 2024] AlphaPruning: Using Heavy-Tailed Self Regularization Theory for Improved Layer-wise Pruning of Large Language Models ☆29 · Updated 6 months ago
- ☆89 · Updated 2 years ago
- ☆35 · Updated last year
- ☆36 · Updated 3 years ago
- ☆61 · Updated last year
- Efficient Dataset Distillation by Representative Matching ☆113 · Updated last year
- Official implementation of "Modeling Multi-Task Model Merging as Adaptive Projective Gradient Descent" ☆22 · Updated 6 months ago
- ☆32 · Updated 3 years ago
- [IJCAI-2021] Contrastive Model Inversion for Data-Free Knowledge Distillation ☆73 · Updated 3 years ago
- ☆20 · Updated 3 years ago
- [IJCAI'22 Survey] Recent Advances on Neural Network Pruning at Initialization ☆59 · Updated 2 years ago
- [ICLR 2024] Towards Lossless Dataset Distillation via Difficulty-Aligned Trajectory Matching ☆105 · Updated last year