Meta-knowledge-Lab / DLB
Code for the paper "Self-Distillation from the Last Mini-Batch for Consistency Regularization"
☆41 · Updated 2 years ago
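For context, DLB's consistency regularizer reuses the network's own predictions on samples shared with the immediately preceding mini-batch as soft targets, so no teacher network or second forward pass is required. Below is a minimal PyTorch sketch of that idea, not the authors' code: the half-overlapping sampler, the `dlb_step` helper, and the hyper-parameters `T` (temperature) and `alpha` (consistency weight) are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def dlb_step(model, images, labels, prev_logits, T=3.0, alpha=1.0):
    """One training step of last-mini-batch self-distillation (sketch).

    Assumes a sampler that arranges `images` so its first half repeats the
    second half of the previous mini-batch; `prev_logits` holds the logits
    the model produced for those shared samples on the previous step.
    """
    logits = model(images)
    loss = F.cross_entropy(logits, labels)
    if prev_logits is not None:
        half = prev_logits.size(0)
        # Consistency term: pull current predictions on the shared half
        # toward the (already detached) soft targets from the last batch.
        kd = F.kl_div(
            F.log_softmax(logits[:half] / T, dim=1),
            F.softmax(prev_logits / T, dim=1),
            reduction="batchmean",
        ) * (T * T)
        loss = loss + alpha * kd
    # Cache this batch's second half as soft targets for the next step.
    next_prev = logits[logits.size(0) // 2:].detach()
    return loss, next_prev
```

In a training loop, `prev_logits` starts as `None` and is replaced by `next_prev` after each optimizer step, so the soft targets are always exactly one iteration old.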
Alternatives and similar repositories for DLB
Users interested in DLB are comparing it to the repositories listed below:
- Official PyTorch implementation of PS-KD ☆84 · Updated 2 years ago
- A simple PyTorch reimplementation of Online Knowledge Distillation via Collaborative Learning ☆47 · Updated 2 years ago
- [CVPR-2022] Official implementation for "Knowledge Distillation with the Reused Teacher Classifier" ☆92 · Updated 2 years ago
- [AAAI-2021, TKDE-2023] Official implementation for "Cross-Layer Distillation with Semantic Calibration" ☆74 · Updated 6 months ago
- [ECCV-2022] Official implementation of MixSKD: Self-Knowledge Distillation from Mixup for Image Recognition, and PyTorch implementations of… ☆102 · Updated 2 years ago
- ☆33 · Updated last year
- ☆125 · Updated 4 years ago
- Switchable Online Knowledge Distillation ☆18 · Updated 3 months ago
- S2-BNN: Bridging the Gap Between Self-Supervised Real and 1-bit Neural Networks via Guided Distribution Calibration (CVPR 2021) ☆64 · Updated 3 years ago
- Official implementation for (Refine Myself by Teaching Myself: Feature Refinement via Self-Knowledge Distillation, CVPR-2021) ☆97 · Updated 9 months ago
- Implementation of the AAAI 2021 paper "Progressive Network Grafting for Few-Shot Knowledge Distillation" ☆32 · Updated 6 months ago
- ☆58 · Updated 2 years ago
- Self-distillation with Batch Knowledge Ensembling Improves ImageNet Classification ☆82 · Updated 3 years ago
- Official implementation of the paper "Function-Consistent Feature Distillation" (ICLR 2023) ☆27 · Updated last year
- Code for Feature Fusion for Online Mutual Knowledge Distillation ☆24 · Updated 4 years ago
- ☆45 · Updated 2 years ago
- Learning Efficient Vision Transformers via Fine-Grained Manifold Distillation (NeurIPS 2022) ☆32 · Updated 2 years ago
- PyTorch implementation of our paper accepted by IEEE TNNLS, 2022: "Distilling a Powerful Student Model via Online Knowledge Distillation" ☆28 · Updated 3 years ago
- A generic code base for neural network pruning, especially for pruning at initialization ☆30 · Updated 2 years ago
- ☆57 · Updated 3 years ago
- ☆26 · Updated 4 years ago
- A PyTorch implementation of the paper 'Be Your Own Teacher: Improve the Performance of Convolutional Neural Networks via Self Distillation', … ☆170 · Updated 3 years ago
- Official implementation of the paper "Knowledge Distillation from A Stronger Teacher" (NeurIPS 2022) ☆140 · Updated 2 years ago
- Complementary Relation Contrastive Distillation ☆14 · Updated 3 years ago
- [ICLR'23] Trainability Preserving Neural Pruning (PyTorch) ☆32 · Updated last year
- Official implementation for (Show, Attend and Distill: Knowledge Distillation via Attention-based Feature Matching, AAAI-2021) ☆117 · Updated 4 years ago
- Knowledge Transfer via Dense Cross-layer Mutual-distillation (ECCV'2020) ☆30 · Updated 4 years ago
- [NeurIPS'22] What Makes a "Good" Data Augmentation in Knowledge Distillation -- A Statistical Perspective ☆36 · Updated 2 years ago
- Implementation of Conv-based and ViT-based networks designed for CIFAR ☆72 · Updated 2 years ago
- Regularizing Class-wise Predictions via Self-knowledge Distillation (CVPR 2020) ☆107 · Updated 4 years ago