Gumpest / AvatarKD
[ACM MM'23] Official implementation of paper "Avatar Knowledge Distillation: Self-ensemble Teacher Paradigm with Uncertainty".
☆14 · Updated 2 years ago
Alternatives and similar repositories for AvatarKD
Users interested in AvatarKD are comparing it to the libraries listed below.
- PyTorch code and checkpoint release for VanillaKD: https://arxiv.org/abs/2305.15781 ☆76 · Updated 2 years ago
- Training ImageNet / CIFAR models with SOTA strategies and fancy techniques such as ViT, KD, Rep, etc. ☆87 · Updated last year
- The official project website of "NORM: Knowledge Distillation via N-to-One Representation Matching" (the paper of NORM is published in IC… ☆20 · Updated 2 years ago
- Official implementation of paper "Masked Distillation with Receptive Tokens", ICLR 2023. ☆71 · Updated 2 years ago
- ☆78 · Updated 2 years ago
- (AAAI 2023 Oral) PyTorch implementation of "CF-ViT: A General Coarse-to-Fine Method for Vision Transformer" ☆106 · Updated 2 years ago
- A Close Look at Spatial Modeling: From Attention to Convolution ☆92 · Updated 3 years ago
- ☆27 · Updated 3 years ago
- [BMVC 2024] PlainMamba: Improving Non-hierarchical Mamba in Visual Recognition ☆85 · Updated 9 months ago
- MMDetection2-based repository of lightweight detection models (NanoDet, PicoDet), also including detection knowledge distillation methods ☆14 · Updated 4 years ago
- [NeurIPS 2024] Official code release for the paper "Revisiting the Integration of Convolution and Attention for Vision Backbone" ☆42 · Updated 11 months ago
- Official PyTorch implementation of Super Vision Transformer (IJCV) ☆43 · Updated 2 years ago
- ☆28 · Updated 3 years ago
- ☆48 · Updated 2 years ago
- [ECCV 2022] AMixer: Adaptive Weight Mixing for Self-attention Free Vision Transformers ☆29 · Updated 3 years ago
- Switchable Online Knowledge Distillation ☆19 · Updated last year
- [ICML 2024] DetKDS: Knowledge Distillation Search for Object Detectors ☆17 · Updated last year
- Official implementation of Evo-ViT: Slow-Fast Token Evolution for Dynamic Vision Transformer ☆74 · Updated 3 years ago
- [AAAI 2022] Official PyTorch implementation of "Less is More: Pay Less Attention in Vision Transformers" ☆97 · Updated 3 years ago
- EATFormer: Improving Vision Transformer Inspired by Evolutionary Algorithm ☆35 · Updated 3 years ago
- [NeurIPS'22] Projector Ensemble Feature Distillation ☆30 · Updated 2 years ago
- [CVPR'24] Official implementation of paper "FreeKD: Knowledge Distillation via Semantic Frequency Prompt" ☆49 · Updated last year
- ☆36 · Updated 2 years ago
- ☆28 · Updated 2 years ago
- [CVPR 2023] Class Attention Transfer Based Knowledge Distillation ☆46 · Updated 2 years ago
- Official code for the paper "Exploring Inter-Channel Correlation for Diversity-preserved Knowledge Distillation" ☆17 · Updated 3 years ago
- ☆30 · Updated last year
- Zone Evaluation: Revealing Spatial Bias in Object Detection (TPAMI 2024) ☆48 · Updated last year
- PELA: Learning Parameter-Efficient Models with Low-Rank Approximation [CVPR 2024] ☆19 · Updated last year
- ☆37 · Updated 2 months ago