ZuchniakK / MTKD
Multi-Teacher Knowledge Distillation; code for my PhD dissertation. I used knowledge distillation as a decision-fusion and compression mechanism for ensemble models.
☆23 · Updated 2 years ago
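To make the idea concrete, below is a minimal sketch of a multi-teacher distillation loss in PyTorch. This is illustrative only, not the repository's actual implementation: the uniform teacher averaging, `temperature`, and `alpha` weighting are assumptions chosen for the example.

```python
# Minimal multi-teacher knowledge distillation sketch (PyTorch).
# NOT the MTKD repository's code: the uniform teacher averaging,
# temperature, and alpha weighting are illustrative assumptions.
import torch
import torch.nn.functional as F

def multi_teacher_kd_loss(student_logits, teacher_logits_list, labels,
                          temperature=4.0, alpha=0.5):
    """Weighted sum of hard-label cross-entropy and soft-target distillation.

    student_logits:      [batch, classes] tensor from the student.
    teacher_logits_list: list of [batch, classes] tensors, one per teacher.
    """
    # Fuse the ensemble by averaging the teachers' softened distributions.
    teacher_probs = torch.stack(
        [F.softmax(t / temperature, dim=-1) for t in teacher_logits_list]
    ).mean(dim=0)

    # Student's softened log-probabilities for the KL term.
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)

    # T^2 scaling keeps gradient magnitudes comparable to the CE term
    # (standard correction from Hinton et al., 2015).
    kd_loss = F.kl_div(student_log_probs, teacher_probs,
                       reduction="batchmean") * temperature ** 2
    ce_loss = F.cross_entropy(student_logits, labels)
    return alpha * kd_loss + (1.0 - alpha) * ce_loss
```

Averaging the softened teacher distributions is the simplest fusion rule; since the dissertation treats distillation as a decision-fusion mechanism, weighted or learned fusion schemes are natural variants.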
Alternatives and similar repositories for MTKD
Users interested in MTKD are comparing it to the libraries listed below.
- This is the implementation for the ICME 2023 paper "Adaptive Multi-Teacher Knowledge Distillation with Meta-Learning". ☆30 · Updated 2 years ago
- Training ImageNet / CIFAR models with SOTA strategies and fancy techniques such as ViT, KD, Rep, etc. ☆86 · Updated last year
- Official implementation of the paper "Knowledge Distillation from A Stronger Teacher", NeurIPS 2022 ☆152 · Updated 2 years ago
- [AAAI 2023] Official PyTorch code for "Curriculum Temperature for Knowledge Distillation" ☆180 · Updated 11 months ago
- Wavelet-Attention CNN for Image Classification ☆32 · Updated 3 years ago
- The official repo for the CVPR 2023 highlight paper "Gradient Norm Aware Minimization Seeks First-Order Flatness and Improves Generalization". ☆84 · Updated 2 years ago
- Re-implementation of Online Label Smoothing. ☆20 · Updated 4 years ago
- A PyTorch implementation of the paper "Be Your Own Teacher: Improve the Performance of Convolutional Neural Networks via Self Distillation", … ☆181 · Updated 3 years ago
- Unofficial implementation of MLP-Mixer, gMLP, resMLP, Vision Permutator, S2MLP, S2MLPv2, RaftMLP, HireMLP, ConvMLP, AS-MLP, SparseMLP, Co… ☆170 · Updated 3 years ago
- An official codebase of the paper "Revisiting Sparse Convolutional Model for Visual Recognition" ☆125 · Updated 2 years ago
- This repository periodically updates MTL papers and resources ☆73 · Updated 3 months ago
- PyTorch code and checkpoints release for OFA-KD: https://arxiv.org/abs/2310.19444 ☆132 · Updated last year
- t-SNE dimensionality-reduction visualization of the features extracted at each layer of a convolutional neural network ☆22 · Updated 3 years ago
- Code for the paper "Self-Distillation from the Last Mini-Batch for Consistency Regularization" ☆43 · Updated 3 years ago
- ☆11 · Updated 2 years ago
- This repository maintains a collection of important papers on knowledge distillation (awesome-knowledge-distillation). ☆81 · Updated 8 months ago
- Elsevier LaTeX templates ☆65 · Updated 6 months ago
- Code for "Multi-level Logit Distillation" (CVPR 2023) ☆70 · Updated last year
- [CVPR 2022] TVConv: Efficient Translation Variant Convolution for Layout-aware Visual Processing ☆48 · Updated 3 years ago
- ☆16 · Updated 4 years ago
- Official implementation for "Show, Attend and Distill: Knowledge Distillation via Attention-based Feature Matching" (AAAI 2021) ☆118 · Updated 4 years ago
- ☆151 · Updated last year
- The official code for the paper "Delving Deep into Label Smoothing", IEEE TIP 2021 ☆81 · Updated 3 years ago
- Official PyTorch implementation of PS-KD ☆89 · Updated 3 years ago
- "NKD and USKD" (ICCV 2023) and "ViTKD" (CVPRW 2024) ☆238 · Updated 2 years ago
- [ECCV 2022] Official implementation of MixSKD: Self-Knowledge Distillation from Mixup for Image Recognition && PyTorch implementations of… ☆109 · Updated 2 years ago
- Official code release of our paper "EViT: An Eagle Vision Transformer with Bi-Fovea Self-Attention" ☆21 · Updated 3 months ago
- Unofficial implementation of the ECCV 2020 paper "Feature Space Augmentation for Long-Tailed Data" ☆25 · Updated 3 years ago
- [CVPR 2022] Official implementation for "Knowledge Distillation with the Reused Teacher Classifier". ☆100 · Updated 3 years ago
- Channel pruning for accelerating very deep neural networks ☆13 · Updated 4 years ago