shriramsb / Distilling-the-Knowledge-in-a-Neural-Network
Demonstration of transfer of knowledge and generalization with distillation
☆55 Updated 7 years ago
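The repo demonstrates the soft-target distillation recipe from Hinton et al. (2015), "Distilling the Knowledge in a Neural Network". For orientation, here is a minimal PyTorch sketch of that loss; the function name and the hyperparameters `T` and `alpha` are illustrative choices, not code taken from this repository.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Hinton-style distillation: blend soft teacher targets with hard labels."""
    # Soft targets: KL divergence between temperature-softened distributions.
    # Scaling by T**2 keeps soft-target gradients comparable across temperatures.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: ordinary cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    # Example usage: loss = distillation_loss(student(x), teacher(x).detach(), y)
    return alpha * soft + (1.0 - alpha) * hard
```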
Alternatives and similar repositories for Distilling-the-Knowledge-in-a-Neural-Network
Users interested in Distilling-the-Knowledge-in-a-Neural-Network are comparing it to the libraries listed below.
- A PyTorch implementation of the paper 'Be Your Own Teacher: Improve the Performance of Convolutional Neural Networks via Self Distillation', … ☆180 Updated 3 years ago
- PyTorch implementation of "Distilling the Knowledge in a Neural Network" ☆65 Updated 4 years ago
- A simple PyTorch reimplementation of Online Knowledge Distillation via Collaborative Learning ☆50 Updated 3 years ago
- Official implementation for (Show, Attend and Distill: Knowledge Distillation via Attention-based Feature Matching, AAAI-2021) ☆118 Updated 4 years ago
- ☆128 Updated 5 years ago
- Reproduce CKA: Similarity of Neural Network Representations Revisited ☆311 Updated 5 years ago
- This repository maintains a collection of important papers on knowledge distillation (awesome-knowledge-distillation). ☆82 Updated 10 months ago
- This is the implementation for the ICASSP-2022 paper (Confidence-Aware Multi-Teacher Knowledge Distillation). ☆63 Updated 3 years ago
- Implementation of a Vision Transformer from scratch, with performance compared to standard CNNs (ResNets) and a pre-trained ViT on CIFAR10 and … ☆118 Updated last year
- Implementation of Contrastive Learning with Adversarial Examples ☆29 Updated 5 years ago
- PyTorch implementation of Data-Free Network Quantization With Adversarial Knowledge Distillation ☆30 Updated 4 years ago
- Here is the official implementation of the model KD3A in the paper "KD3A: Unsupervised Multi-Source Decentralized Domain Adaptation via Knowl… ☆119 Updated 3 years ago
- Code for the paper "Self-Distillation from the Last Mini-Batch for Consistency Regularization" ☆43 Updated 3 years ago
- AMTML-KD: Adaptive Multi-teacher Multi-level Knowledge Distillation ☆65 Updated 4 years ago
- [TPAMI-2023] Official implementation of L-MCL: Online Knowledge Distillation via Mutual Contrastive Learning for Visual Recognition ☆26 Updated 2 years ago
- A NumPy and PyTorch implementation of CKA similarity with CUDA support ☆94 Updated 4 years ago
- A PyTorch implementation of MoCo based on the CVPR 2020 paper "Momentum Contrast for Unsupervised Visual Representation Learning" ☆56 Updated 5 years ago
- Official Repository for MocoSFL (accepted by ICLR '23, notable 5%) ☆53 Updated 2 years ago
- ☆110 Updated 2 years ago
- Code for Feature Fusion for Online Mutual Knowledge Distillation ☆27 Updated 5 years ago
- Code for 'Multi-level Logit Distillation' (CVPR 2023) ☆71 Updated last year
- PyTorch implementation of "When Does Label Smoothing Help?" ☆126 Updated 6 years ago
- [AAAI-2022] Up to 100x Faster Data-free Knowledge Distillation ☆76 Updated 3 years ago
- IJCAI 2021, "Comparing Kullback-Leibler Divergence and Mean Squared Error Loss in Knowledge Distillation" ☆42 Updated 2 years ago
- Benchmarking various computer vision models on the TinyImageNet dataset ☆34 Updated 2 years ago
- NeurIPS 2021, "Fine Samples for Learning with Noisy Labels" ☆41 Updated 4 years ago
- pytorch-tiny-imagenet ☆188 Updated 2 weeks ago
- Confidence-Aware Learning for Deep Neural Networks (ICML 2020) ☆74 Updated 5 years ago
- Reproducing RigL (ICML 2020) as part of the ML Reproducibility Challenge 2020 ☆29 Updated 4 years ago
- Official [ICLR] Code Repository for "Gradient Projection Memory for Continual Learning" ☆100 Updated 4 years ago