luilui97 / DSPP
ACCV2022 Source Code of paper "Feature Decoupled Knowledge Distillation via Spatial Pyramid Pooling"
☆11 · Updated last year
Alternatives and similar repositories for DSPP:
Users interested in DSPP are comparing it to the repositories listed below.
- Source Code for "Dual-Level Knowledge Distillation via Knowledge Alignment and Correlation", TNNLS, https://ieeexplore.ieee.org/abstract/… ☆12 · Updated 2 years ago
- SHAKE ☆18 · Updated last year
- Switchable Online Knowledge Distillation ☆18 · Updated 5 months ago
- This is an official implementation of our CVPR 2020 paper "Non-Local Neural Networks With Grouped Bilinear Attentional Transforms". ☆12 · Updated 4 years ago
- ☆22 · Updated 2 years ago
- TF-FD ☆20 · Updated 2 years ago
- Distilling knowledge from an ensemble of multiple teacher networks to a student network with multiple heads ☆7 · Updated 3 years ago
- PyTorch code for the paper "CrossTransformers: spatially-aware few-shot transfer" ☆23 · Updated 4 years ago
- ☆12 · Updated 2 weeks ago
- ☆27 · Updated 2 years ago
- Official implementation of the paper "Function-Consistent Feature Distillation" (ICLR 2023) ☆28 · Updated last year
- Official implementation for "Knowledge Distillation with Refined Logits". ☆13 · Updated 7 months ago
- This is the official code for the paper "Token Summarisation for Efficient Vision Transformers via Graph-based Token Propagation" ☆27 · Updated last year
- PyTorch implementation (TPAMI 2023) - Training Compact CNNs for Image Classification using Dynamic-coded Filter Fusion ☆19 · Updated 2 years ago
- CVPR 2023, Class Attention Transfer Based Knowledge Distillation ☆41 · Updated last year
- [TPAMI 2023] Official implementations of L-MCL: Online Knowledge Distillation via Mutual Contrastive Learning for Visual Recognition ☆23 · Updated last year
- ☆25 · Updated last year
- PyTorch implementation of our paper accepted by IEEE TNNLS, 2022 -- Distilling a Powerful Student Model via Online Knowledge Distillation ☆28 · Updated 3 years ago
- Official code for "Knowing what it is: Semantic-enhanced Dual Attention Transformer" (TMM 2022) ☆19 · Updated 2 years ago
- 🔥MixPro: Data Augmentation with MaskMix and Progressive Attention Labeling for Vision Transformer [Official, ICLR 2023] ☆22 · Updated last year
- ☆43 · Updated 2 years ago
- Learning from Limited and Imperfect Data (L2ID): Classification Challenges ☆18 · Updated 4 years ago
- Official implementation of "Multi-Mode Online Knowledge Distillation for Self-Supervised Visual Representation Learning", CVPR 2023 ☆10 · Updated last year
- ☆19 · Updated 3 years ago
- Information Bottleneck Approach to Spatial Attention Learning, IJCAI 2021 ☆15 · Updated 3 years ago
- The official project website of "NORM: Knowledge Distillation via N-to-One Representation Matching" (The paper of NORM is published in IC… ☆20 · Updated last year
- [ICLR 2023] "Layer Grafted Pre-training: Bridging Contrastive Learning And Masked Image Modeling For Better Representations", Ziyu Jian… ☆24 · Updated 2 years ago
- [MM'22] Official implementation of the paper "Avatar Knowledge Distillation: Self-ensemble Teacher Paradigm with Uncertainty" ☆13 · Updated last year
- Official repository for the ACCV 2020 paper "Class-Wise Difficulty-Balanced Loss for Solving Class-Imbalance" ☆18 · Updated 3 years ago
- Code for "Multi-level Logit Distillation" (CVPR 2023) ☆60 · Updated 6 months ago