SforAiDl / KD_Lib
A PyTorch Knowledge Distillation library for benchmarking and extending works in the domains of Knowledge Distillation, Pruning, and Quantization.
☆ 630 · Updated 2 years ago
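KD_Lib wraps common distillation algorithms behind trainer-style classes. For orientation, below is a minimal sketch of the vanilla knowledge-distillation objective (softened-logit KL divergence combined with cross-entropy on hard labels) in plain PyTorch; the `distillation_loss` helper and the `temperature` / `alpha` names are illustrative assumptions, not KD_Lib's own API.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets,
                      temperature=4.0, alpha=0.5):
    """Vanilla KD loss: KL between temperature-softened teacher and student
    distributions plus standard cross-entropy on the ground-truth labels.
    (Illustrative helper, not part of KD_Lib's API.)"""
    # Soften both distributions with the temperature
    soft_teacher = F.softmax(teacher_logits / temperature, dim=1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=1)

    # Scale the KL term by T^2 so its gradients keep a comparable magnitude
    kd_term = F.kl_div(log_soft_student, soft_teacher,
                       reduction="batchmean") * (temperature ** 2)
    ce_term = F.cross_entropy(student_logits, targets)
    return alpha * kd_term + (1.0 - alpha) * ce_term
```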
Alternatives and similar repositories for KD_Lib
Users interested in KD_Lib are comparing it to the libraries listed below.
- A coding-free framework built on PyTorch for reproducible deep learning studies. PyTorch Ecosystem. 25 knowledge distillation methods p… ☆ 1,512 · Updated 3 weeks ago
- PyTorch implementation of various Knowledge Distillation (KD) methods. ☆ 1,698 · Updated 3 years ago
- Knowledge Distillation: CVPR2020 Oral, Revisiting Knowledge Distillation via Label Smoothing Regularization ☆ 585 · Updated 2 years ago
- A PyTorch implementation for exploring deep and shallow knowledge distillation (KD) experiments with flexibility ☆ 1,940 · Updated 2 years ago
- A large-scale study of Knowledge Distillation. ☆ 220 · Updated 5 years ago
- PyTorch library to facilitate development and standardized evaluation of neural network pruning methods. ☆ 430 · Updated last year
- MEAL V2: Boosting Vanilla ResNet-50 to 80%+ Top-1 Accuracy on ImageNet without Tricks. In NeurIPS 2020 workshop. ☆ 696 · Updated 3 years ago
- A general and accurate MACs / FLOPs profiler for PyTorch models ☆ 613 · Updated last year
- Is the attention layer even necessary? (https://arxiv.org/abs/2105.02723) ☆ 485 · Updated 4 years ago
- Estimate/count FLOPS for a given neural network using PyTorch ☆ 304 · Updated 3 years ago
- Toolbox to extend PyTorch functionalities ☆ 421 · Updated last year
- Unofficial PyTorch Reimplementation of RandAugment. ☆ 636 · Updated 2 years ago
- Code for Noisy Student Training. https://arxiv.org/abs/1911.04252 ☆ 761 · Updated 4 years ago
- Official PyTorch implementation of "A Comprehensive Overhaul of Feature Distillation" (ICCV 2019) ☆ 418 · Updated 4 years ago
- Escaping the Big Data Paradigm with Compact Transformers, 2021 (Train your Vision Transformers in 30 mins on CIFAR-10 with a single GPU!) ☆ 531 · Updated 7 months ago
- NFNets and Adaptive Gradient Clipping for SGD implemented in PyTorch. Find explanation at tourdeml.github.io/blog/ ☆ 345 · Updated last year
- Compare neural networks by their feature similarity ☆ 361 · Updated 2 years ago
- Awesome machine learning model compression research papers, quantization, tools, and learning material. ☆ 523 · Updated 8 months ago
- Summary and code for Deep Neural Network Quantization ☆ 548 · Updated 7 months ago
- Collection of recent methods on (deep) neural network compression and acceleration. ☆ 950 · Updated 2 months ago
- Implementation of ConvMixer for "Patches Are All You Need? 🤷" ☆ 1,072 · Updated 2 years ago
- AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty ☆ 989 · Updated this week
- Gradually-Warmup Learning Rate Scheduler for PyTorch ☆ 988 · Updated 7 months ago
- Unofficial PyTorch implementation of "Meta Pseudo Labels" ☆ 387 · Updated last year
- [ICLR 2020] Lite Transformer with Long-Short Range Attention ☆ 608 · Updated 10 months ago
- Rethinking the Value of Network Pruning (PyTorch) (ICLR 2019) ☆ 1,514 · Updated 4 years ago
- The official implementation of [CVPR2022] Decoupled Knowledge Distillation https://arxiv.org/abs/2203.08679 and [ICCV2023] DOT: A Distill… ☆ 856 · Updated last year
- Learning Rate Warmup in PyTorch ☆ 410 · Updated 2 months ago
- [ICLR 2020] Contrastive Representation Distillation (CRD), and benchmark of recent knowledge distillation methods ☆ 2,339 · Updated last year
- ☆ 598 · Updated 7 months ago