shriramsb / Distilling-the-Knowledge-in-a-Neural-Network
Demonstration of transfer of knowledge and generalization with distillation
☆55 · Updated 6 years ago
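The headline repo demonstrates Hinton et al.'s soft-target distillation. As a rough sketch of that technique (not code from the repo itself; `T` and `alpha` are illustrative defaults), the student is trained against a blend of the temperature-softened teacher distribution and the hard labels:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Soft-target distillation loss in the style of Hinton et al. (2015).

    Mixes KL divergence between temperature-softened teacher and student
    distributions with ordinary cross-entropy on the hard labels.
    T and alpha are illustrative defaults, not values from the repo above.
    """
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # rescale by T^2 so soft-target gradients keep their magnitude
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1 - alpha) * hard_loss
```

Most of the distillation repos listed below implement some variant of this objective, differing mainly in where the teacher signal comes from (a fixed teacher, peers, or the network's own deeper layers).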
Alternatives and similar repositories for Distilling-the-Knowledge-in-a-Neural-Network
Users interested in Distilling-the-Knowledge-in-a-Neural-Network are comparing it to the repositories listed below.
 - A PyTorch implementation of the paper 'Be Your Own Teacher: Improve the Performance of Convolutional Neural Networks via Self Distillation', … ☆181 · Updated 3 years ago
 - A simple PyTorch reimplementation of Online Knowledge Distillation via Collaborative Learning ☆50 · Updated 2 years ago
 - Reproduction of CKA: Similarity of Neural Network Representations Revisited ☆310 · Updated 5 years ago
 - This repository maintains a collection of important papers on knowledge distillation (awesome-knowledge-distillation) ☆81 · Updated 7 months ago
 - PyTorch implementation of "Distilling the Knowledge in a Neural Network" ☆65 · Updated 3 years ago
 - Implementation for the ICASSP 2022 paper "Confidence-Aware Multi-Teacher Knowledge Distillation" ☆63 · Updated 3 years ago
 - PyTorch implementation of Data-Free Network Quantization With Adversarial Knowledge Distillation ☆30 · Updated 4 years ago
 - pytorch-tiny-imagenet ☆186 · Updated last month
 - Implementation of Vision Transformer from scratch and performance compared to standard CNNs (ResNets) and pre-trained ViT on CIFAR10 and … ☆114 · Updated last year
 - [ICLR-2020] Dynamic Sparse Training: Find Efficient Sparse Network From Scratch With Trainable Masked Layers ☆31 · Updated 5 years ago
 - Benchmarking various computer vision models on the TinyImageNet dataset ☆34 · Updated 2 years ago
 - Implementation of Contrastive Learning with Adversarial Examples ☆29 · Updated 4 years ago
 - Elastic weight consolidation technique for incremental learning ☆150 · Updated 4 years ago
 - AMTML-KD: Adaptive Multi-teacher Multi-level Knowledge Distillation ☆63 · Updated 4 years ago
 - A PyTorch implementation of MoCo based on the CVPR 2020 paper "Momentum Contrast for Unsupervised Visual Representation Learning" ☆55 · Updated 5 years ago
 - ☆127 · Updated 5 years ago
 - [AAAI-2020] Official implementation of "Online Knowledge Distillation with Diverse Peers" ☆75 · Updated 2 years ago
 - PyTorch implementation of "When Does Label Smoothing Help?" ☆126 · Updated 5 years ago
 - Official implementation of "Show, Attend and Distill: Knowledge Distillation via Attention-based Feature Matching" (AAAI-2021) ☆119 · Updated 4 years ago
 - An unofficial PyTorch implementation of "Deep Mutual Learning" for classification on CIFAR-100 ☆168 · Updated 5 years ago
 - Official implementation of the model KD3A from the paper "KD3A: Unsupervised Multi-Source Decentralized Domain Adaptation via Knowl… ☆118 · Updated 3 years ago
 - Reproducing RigL (ICML 2020) as a part of ML Reproducibility Challenge 2020 ☆29 · Updated 3 years ago
 - Code for 'Multi-level Logit Distillation' (CVPR2023) ☆70 · Updated last year
 - IJCAI 2021, "Comparing Kullback-Leibler Divergence and Mean Squared Error Loss in Knowledge Distillation" ☆42 · Updated 2 years ago
 - Official [ICLR] code repository for "Gradient Projection Memory for Continual Learning" ☆97 · Updated 4 years ago
 - Image classification on Tiny ImageNet ☆75 · Updated 2 years ago
 - ICLR 2022 paper submission trend analysis from https://openreview.net/group?id=ICLR.cc/2022/Conference ☆85 · Updated 3 years ago
 - Feature Fusion for Online Mutual Knowledge Distillation code ☆26 · Updated 5 years ago
 - ☆109 · Updated 2 years ago
 - [ICML2020] Normalized Loss Functions for Deep Learning with Noisy Labels ☆141 · Updated last year
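One of the entries above compares KL divergence and MSE as distillation objectives (IJCAI 2021). A minimal sketch of the two objectives, assuming plain logit tensors (this is an illustration, not code from that repo; `T` is an illustrative default):

```python
import torch
import torch.nn.functional as F

def kd_kl_loss(student_logits, teacher_logits, T=4.0):
    """Standard softened-KL distillation objective."""
    return F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)

def kd_mse_loss(student_logits, teacher_logits):
    """Direct logit matching; the IJCAI 2021 paper listed above argues this
    behaves similarly to the KL objective at high temperature."""
    return F.mse_loss(student_logits, teacher_logits)
```

Both take the same `(batch, num_classes)` logit tensors, so they can be swapped in a training loop to reproduce the paper's comparison.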