git-disl / recap
Code for the CVPR 2024 paper "Resource-Efficient Transformer Pruning for Finetuning of Large Models"
☆12 · Updated 3 months ago
Alternatives and similar repositories for recap
Users interested in recap are comparing it to the libraries listed below.
- A NumPy and PyTorch implementation of CKA-similarity with CUDA support (a minimal CKA sketch appears after this list) ☆94 · Updated 4 years ago
- [AAAI-2022] Up to 100x Faster Data-free Knowledge Distillation ☆76 · Updated 3 years ago
- [ICCV23] Robust Mixture-of-Expert Training for Convolutional Neural Networks by Yihua Zhang, Ruisi Cai, Tianlong Chen, Guanhua Zhang, Hua… ☆67 · Updated 2 years ago
- Awesome-Low-Rank-Adaptation ☆128 · Updated last year
- Implementation of the FedPM framework by the authors of the ICLR 2023 paper "Sparse Random Networks for Communication-Efficient Federated… ☆30 · Updated 2 years ago
- Prioritize Alignment in Dataset Distillation ☆21 · Updated last year
- [CVPR '24] Official implementation of the paper "Multiflow: Shifting Towards Task-Agnostic Vision-Language Pruning". ☆23 · Updated 11 months ago
- [AAAI, ICLR TP] Fast Machine Unlearning Without Retraining Through Selective Synaptic Dampening ☆56 · Updated last year
- [ICLR 2023] Test-time Robust Personalization for Federated Learning ☆55 · Updated 2 years ago
- Awesome Pruning. ✅ Curated Resources for Neural Network Pruning. ☆173 · Updated last year
- Official Repository for MocoSFL (accepted by ICLR '23, notable 5%) ☆53 · Updated 2 years ago
- A generic code base for neural network pruning, especially for pruning at initialization. ☆31 · Updated 3 years ago
- Qimera: Data-free Quantization with Synthetic Boundary Supporting Samples [NeurIPS 2021] ☆34 · Updated 4 years ago
- Code for Adaptive Deep Neural Network Inference Optimization with EENet ☆12 · Updated last year
- In progress. ☆68 · Updated last year
- [IJCAI-2021] Contrastive Model Inversion for Data-Free Knowledge Distillation ☆73 · Updated 3 years ago
- ☆89 · Updated 3 years ago
- The official implementation of TinyTrain [ICML '24] ☆24 · Updated last year
- [ICML 2021] "Do We Actually Need Dense Over-Parameterization? In-Time Over-Parameterization in Sparse Training" by Shiwei Liu, Lu Yin, De… ☆45 · Updated 2 years ago
- This repository maintains a collection of important papers on knowledge distillation (awesome-knowledge-distillation). ☆82 · Updated 10 months ago
- Official PyTorch implementation of "Dataset Condensation via Efficient Synthetic-Data Parameterization" (ICML'22) ☆116 · Updated 2 years ago
- PyTorch code for our paper "Resource-Adaptive Federated Learning with All-In-One Neural Composition" (NeurIPS 2022) ☆19 · Updated 3 years ago
- [TPAMI 2023] Low Dimensional Landscape Hypothesis is True: DNNs can be Trained in Tiny Subspaces ☆43 · Updated 3 years ago
- ☆117 · Updated last year
- ☆15 · Updated last year
- [DMLR 2024] FedAIoT: A Federated Learning Benchmark for Artificial Intelligence of Things ☆59 · Updated last year
- A PyTorch implementation of Centered Kernel Alignment (CKA) with GPU acceleration. ☆57 · Updated 2 years ago
- ☆24 · Updated 2 years ago
- [ICLR 2023] Pruning Deep Neural Networks from a Sparsity Perspective ☆25 · Updated 2 years ago
- Reimplementation of "Visualizing the Loss Landscape of Neural Nets" with PyTorch 1.8 ☆31 · Updated 3 years ago
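
Two of the repositories above provide CKA (Centered Kernel Alignment) similarity implementations. As a rough illustration of what such a library computes, here is a minimal linear-CKA sketch in PyTorch; the function names and the PyTorch-only formulation are assumptions for illustration, not code taken from either repository.

```python
import torch

def _center_gram(gram: torch.Tensor) -> torch.Tensor:
    # Double-center a Gram matrix (equivalent to mean-centering the features).
    n = gram.size(0)
    unit = torch.full((n, n), 1.0 / n, device=gram.device, dtype=gram.dtype)
    return gram - unit @ gram - gram @ unit + unit @ gram @ unit

def linear_cka(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Linear CKA between two activation matrices of shape (n_samples, dim)."""
    gx = _center_gram(x @ x.t())
    gy = _center_gram(y @ y.t())
    hsic = (gx * gy).sum()                  # unnormalized HSIC estimate
    return hsic / (gx.norm() * gy.norm())   # normalize by self-similarities

# Hypothetical usage: compare two layers' activations on the same 512 inputs.
x = torch.randn(512, 64)
y = torch.randn(512, 128)
print(linear_cka(x, y).item())  # value in [0, 1]; move tensors to .cuda() for GPU use
```

The CUDA support advertised by those repositories amounts to running the same Gram-matrix computation on GPU tensors, which is what the `.cuda()` note above gestures at.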