git-disl / recap
Code for the CVPR '24 paper "Resource-Efficient Transformer Pruning for Finetuning of Large Models"
☆12 · Updated 2 months ago
Alternatives and similar repositories for recap
Users interested in recap are comparing it to the repositories listed below.
- A NumPy and PyTorch implementation of CKA-similarity with CUDA support ☆94 · Updated 4 years ago
- FuseFL: One-Shot Federated Learning through the Lens of Causality with Progressive Model Fusion (NeurIPS 2024 Spotlight) ☆13 · Updated 9 months ago
- [AAAI-2022] Up to 100x Faster Data-free Knowledge Distillation ☆76 · Updated 3 years ago
- Awesome-Low-Rank-Adaptation ☆126 · Updated last year
- [ICCV23] Robust Mixture-of-Expert Training for Convolutional Neural Networks by Yihua Zhang, Ruisi Cai, Tianlong Chen, Guanhua Zhang, Hua… ☆66 · Updated 2 years ago
- [ICLR 2023] Test-time Robust Personalization for Federated Learning ☆55 · Updated 2 years ago
- Benchmark of robust self-supervised learning (RobustSSL) methods & code for AutoLoRa (ICLR 2024) ☆19 · Updated last month
- ☆35 · Updated last year
- [CVPR '24] Official implementation of the paper "Multiflow: Shifting Towards Task-Agnostic Vision-Language Pruning" ☆23 · Updated 10 months ago
- Code for Adaptive Deep Neural Network Inference Optimization with EENet ☆12 · Updated last year
- Implementation of the FedPM framework by the authors of the ICLR 2023 paper "Sparse Random Networks for Communication-Efficient Federated… ☆29 · Updated 2 years ago
- [NeurIPS'23] FedL2P: Federated Learning to Personalize ☆24 · Updated 2 months ago
- Reimplementation of Visualizing the Loss Landscape of Neural Nets with PyTorch 1.8 ☆30 · Updated 3 years ago
- Prioritize Alignment in Dataset Distillation ☆21 · Updated last year
- Code for Data-Free Knowledge Distillation via Feature Exchange and Activation Region Constraint ☆21 · Updated 2 years ago
- ☆116 · Updated last year
- This repository maintains a collection of important papers on knowledge distillation (awesome-knowledge-distillation) ☆82 · Updated 9 months ago
- [IJCAI-2021] Contrastive Model Inversion for Data-Free Knowledge Distillation ☆73 · Updated 3 years ago
- Query-Efficient Data-Free Learning from Black-Box Models ☆23 · Updated 2 years ago
- [AAAI, ICLR TP] Fast Machine Unlearning Without Retraining Through Selective Synaptic Dampening ☆55 · Updated last year
- [ICML'24 Oral] APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference ☆46 · Updated last year
- ☆89 · Updated 2 years ago
- Official repository for ResSFL (accepted by CVPR '22) ☆26 · Updated 3 years ago
- Official repository for MocoSFL (accepted by ICLR '23, notable top 5%) ☆53 · Updated 2 years ago
- [DMLR 2024] FedAIoT: A Federated Learning Benchmark for Artificial Intelligence of Things ☆59 · Updated last year
- The official implementation of the paper "Does Federated Learning Really Need Backpropagation?" ☆23 · Updated 2 years ago
- ☆24 · Updated 2 years ago
- PyTorch code for the paper "Resource-Adaptive Federated Learning with All-In-One Neural Composition" (NeurIPS 2022) ☆19 · Updated 3 years ago
- CVPR 2023 - Rethinking Federated Learning with Domain Shift: A Prototype View ☆112 · Updated last year
- Code for the paper "Minimizing the Accumulated Trajectory Error to Improve Dataset Distillation" (CVPR 2023) ☆40 · Updated 2 years ago