zju-SWJ / RLD
Official implementation for "Knowledge Distillation with Refined Logits" (RLD).
☆14 · Updated 10 months ago
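RLD operates at the logit level, so as orientation for the listings below, the sketch beneath shows the vanilla logit-matching KD objective (temperature-softened KL divergence plus cross-entropy) that methods in this family build on. It is a generic PyTorch baseline, not the refined-logit loss from the paper; the function name and the `T`/`alpha` defaults are illustrative assumptions.

```python
import torch.nn.functional as F

def logit_kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Vanilla logit-based KD (Hinton-style), shown only as a generic baseline.

    Mixes the KL divergence between temperature-softened teacher and student
    distributions with the standard cross-entropy on hard labels.
    """
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    kd = F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce
```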
Alternatives and similar repositories for RLD
Users interested in RLD are comparing it to the libraries listed below.
- The official project website of "NORM: Knowledge Distillation via N-to-One Representation Matching" (The paper of NORM is published in IC… ☆20 · Updated last year
- Code for 'Multi-level Logit Distillation' (CVPR 2023) ☆65 · Updated 9 months ago
- ☆26 · Updated last year
- Learning Efficient Vision Transformers via Fine-Grained Manifold Distillation. NeurIPS 2022. ☆32 · Updated 2 years ago
- The codebase for paper "PPT: Token Pruning and Pooling for Efficient Vision Transformer" ☆23 · Updated 7 months ago
- CVPR 2023, Class Attention Transfer Based Knowledge Distillation ☆44 · Updated 2 years ago
- The official implementation of LumiNet: The Bright Side of Perceptual Knowledge Distillation https://arxiv.org/abs/2310.03669 ☆19 · Updated last year
- ☆27 · Updated 2 years ago
- The official project website of "Small Scale Data-Free Knowledge Distillation" (SSD-KD for short, published in CVPR 2024). ☆18 · Updated last year
- This is the official code for paper: Token Summarisation for Efficient Vision Transformers via Graph-based Token Propagation ☆29 · Updated last year
- BESA is a differentiable weight pruning technique for large language models. ☆17 · Updated last year
- Official code for Scale Decoupled Distillation ☆41 · Updated last year
- ☆45 · Updated last year
- Official repository of our work "Finding Lottery Tickets in Vision Models via Data-driven Spectral Foresight Pruning" accepted at CVPR 20… ☆24 · Updated 4 months ago
- [CVPR 2025] DivPrune: Diversity-based Visual Token Pruning for Large Multimodal Models ☆28 · Updated 3 weeks ago
- ☆12 · Updated last year
- 🔥 🔥 [WACV 2024] Mini but Mighty: Finetuning ViTs with Mini Adapters ☆20 · Updated 11 months ago
- PELA: Learning Parameter-Efficient Models with Low-Rank Approximation [CVPR 2024] ☆17 · Updated last year
- The official implementation for MTLoRA: A Low-Rank Adaptation Approach for Efficient Multi-Task Learning (CVPR '24) ☆52 · Updated 3 months ago
- A token pruning method that accelerates ViTs for various tasks while maintaining high performance. ☆14 · Updated 5 months ago
- [BMVC 2022] Information Theoretic Representation Distillation ☆18 · Updated last year
- Official implementation of NeurIPS 2024 "Visual Fourier Prompt Tuning" ☆28 · Updated 5 months ago
- ☆22 · Updated 3 years ago
- Code for Learned Thresholds Token Merging and Pruning for Vision Transformers (LTMP). A technique to reduce the size of Vision Transforme… ☆16 · Updated 7 months ago
- ☆19 · Updated 3 years ago
- [CVPR 2023] Castling-ViT: Compressing Self-Attention via Switching Towards Linear-Angular Attention During Vision Transformer Inference ☆30 · Updated last year
- Low-Rank Rescaled Vision Transformer Fine-Tuning: A Residual Design Approach, CVPR 2024 ☆22 · Updated 11 months ago
- [IJCV 2025] https://arxiv.org/abs/2304.04521 ☆14 · Updated 5 months ago
- [CVPR 2024] VkD: Improving Knowledge Distillation using Orthogonal Projections ☆53 · Updated 8 months ago
- [NeurIPS'22] Projector Ensemble Feature Distillation ☆29 · Updated last year