Source Code for "Dual-Level Knowledge Distillation via Knowledge Alignment and Correlation", TNNLS, https://ieeexplore.ieee.org/abstract/document/9830618
☆12 · Updated Dec 21, 2022
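Nearly every repository below builds on logit- or feature-level knowledge distillation. For orientation, here is a minimal sketch of the vanilla logit-distillation loss (Hinton et al., 2015) that these methods extend; the temperature `T` and weight `alpha` are illustrative defaults, and this is a generic baseline, not DLKD's dual-level alignment-and-correlation objective.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.9):
    """Vanilla logit distillation: a weighted sum of the hard-label
    cross-entropy and the temperature-scaled KL divergence between
    the teacher's and student's output distributions."""
    # Soft-target term: KL(teacher || student) at temperature T,
    # scaled by T^2 to keep gradient magnitudes comparable.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard-target term: standard cross-entropy with ground-truth labels.
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1.0 - alpha) * hard
```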
Alternatives and similar repositories for DLKD
Users interested in DLKD are comparing it to the repositories listed below.
- Source code for the ACCV 2022 paper "Feature Decoupled Knowledge Distillation via Spatial Pyramid Pooling" ☆12 · Updated Jul 5, 2023
- ☆27 · Updated Jun 20, 2021
- Official implementation of "Show, Attend and Distill: Knowledge Distillation via Attention-based Feature Matching" (AAAI 2021) ☆119 · Updated Feb 9, 2021
- A simple PyTorch reimplementation of "Online Knowledge Distillation via Collaborative Learning" ☆50 · Updated Dec 13, 2022
- Official PyTorch implementation of "Learning with Memory-based Virtual Classes for Deep Metric Learning" (ICCV 2021) ☆16 · Updated Oct 13, 2021
- ☆24 · Updated May 6, 2022
- Official implementation of "Online Knowledge Distillation with Diverse Peers" (AAAI 2020) ☆76 · Updated Jul 6, 2023
- Implementation of the "Heterogeneous Knowledge Distillation using Information Flow Modeling" method ☆25 · Updated May 25, 2020
- PyTorch implementation of "Distilling a Powerful Student Model via Online Knowledge Distillation" (IEEE TNNLS 2022) ☆31 · Updated Nov 11, 2021
- PyTorch implementation of "Gated Transfer Network for Transfer Learning" ☆11 · Updated Jun 3, 2019
- Graph Knowledge Distillation ☆13 · Updated Mar 6, 2020
- Code for "Self-Distillation as Instance-Specific Label Smoothing" ☆16 · Updated Oct 22, 2020
- Role-Wise Data Augmentation for Knowledge Distillation ☆19 · Updated Nov 22, 2022
- ☆27 · Updated Feb 6, 2021
- PyTorch implementation of the paper "Towards Realistic Predictors" ☆17 · Updated Sep 26, 2018
- MATLAB demos for weighted higher-order tensor nuclear norm minimization and its applications to hyperspectral image denoising ☆13 · Updated Nov 26, 2024
- Apache Flink ☆16 · Updated Jul 12, 2025
- PyTorch ☆11 · Updated May 14, 2019
- Code for the paper "A Scalable Neural Network for DSIC Affine Maximizer" (NeurIPS 2023) ☆11 · Updated Oct 21, 2023
- Cloud-and-Learning compatible Automated vehicle Platform (mirrored from GitLab; please post issues to the GitLab link) ☆11 · Updated Apr 17, 2022
- Learning Compatible Embeddings (ICCV 2021) ☆33 · Updated Aug 18, 2021
- A C++/Qt project visualizing the Ant Colony System algorithm (with some local-search improvements) ☆10 · Updated Aug 20, 2023
- ☆12 · Updated Jan 22, 2021
- [ECML PKDD 2025 Oral] ☆10 · Updated Jun 9, 2025
- Code for "Multi-level Logit Distillation" (CVPR 2023) ☆71 · Updated Sep 23, 2024
- Learning both Weights and Connections for Efficient Neural Networks (https://arxiv.org/abs/1506.02626) ☆18 · Updated Oct 7, 2020
- Knowledge Transfer via Dense Cross-layer Mutual-distillation (ECCV 2020) ☆30 · Updated Aug 19, 2020
- A demonstration of Redis Stack with Vue, Express, and Node.js ☆15 · Updated Sep 28, 2023
- Adaptive Inter-Class Similarity Distillation for Semantic Segmentation (MTAP 2025) ☆29 · Updated Nov 14, 2025
- Official implementation of "Refine Myself by Teaching Myself: Feature Refinement via Self-Knowledge Distillation" (CVPR 2021) ☆103 · Updated Apr 30, 2024
- ☆14 · Updated Oct 14, 2020
- PyTorch implementation of the CVPR 2019 oral "Mitigating Information Leakage in Image Representations: A Maximum Entropy Approach" ☆28 · Updated Sep 30, 2019
- A Go library for Todoist's REST API ☆10 · Updated Feb 15, 2021
- PyTorch reproduction of "Peer Collaborative Learning for Online Knowledge Distillation" (AAAI 2021) ☆21 · Updated May 28, 2022
- PyTorch implementation of "Snapshot Ensembles: Train 1, Get M for Free" [WIP] ☆36 · Updated May 20, 2017
- ☆47 · Updated Sep 9, 2021
- Official implementation of "Knowledge Distillation with the Reused Teacher Classifier" (CVPR 2022) ☆103 · Updated Jun 16, 2022
- Reads the jagexcache ☆17 · Updated Jan 27, 2016
- AMTML-KD: Adaptive Multi-teacher Multi-level Knowledge Distillation ☆66 · Updated Mar 9, 2021