zju-SWJ / RLD
Official implementation for "Knowledge Distillation with Refined Logits".
☆13 · Updated 8 months ago
Alternatives and similar repositories for RLD:
Users interested in RLD are comparing it to the repositories listed below.
- Code for 'Multi-level Logit Distillation' (CVPR 2023) · ☆63 · Updated 7 months ago
- Official code for Scale Decoupled Distillation · ☆41 · Updated last year
- The official project website of "NORM: Knowledge Distillation via N-to-One Representation Matching" (the paper of NORM is published in IC…) · ☆20 · Updated last year
- Source code for VB-LoRA: Extreme Parameter Efficient Fine-Tuning with Vector Banks (NeurIPS 2024) · ☆37 · Updated 6 months ago
- [CVPR 2023] Castling-ViT: Compressing Self-Attention via Switching Towards Linear-Angular Attention During Vision Transformer Inference · ☆30 · Updated last year
- Learning Efficient Vision Transformers via Fine-Grained Manifold Distillation (NeurIPS 2022) · ☆32 · Updated 2 years ago
- Official code for the paper "Token Summarisation for Efficient Vision Transformers via Graph-based Token Propagation" · ☆27 · Updated last year
- Official implementation of "Visual Fourier Prompt Tuning" (NeurIPS 2024) · ☆26 · Updated 3 months ago
- BESA: a differentiable weight pruning technique for large language models · ☆16 · Updated last year
- [NeurIPS 2023] Lightweight Vision Transformer with Bidirectional Interaction