SforAiDl / KD_Lib
A PyTorch Knowledge Distillation library for benchmarking and extending works in the domains of Knowledge Distillation, Pruning, and Quantization.
★642, updated 2 years ago
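For context, the technique at the heart of KD_Lib and many of the repositories below is response-based knowledge distillation: a small student network is trained to match a larger teacher's temperature-softened output distribution. The following is a minimal plain-PyTorch sketch of the classic Hinton et al. (2015) loss; it is illustrative only, not KD_Lib's own API, and the tensor names are hypothetical.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.9):
    """Blend a soft KL term on temperature-scaled logits with the
    usual hard-label cross-entropy (Hinton et al., 2015)."""
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * (temperature ** 2)  # rescales soft-target gradients, per the paper
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Quick check with random tensors (batch of 8, 10 classes):
student_logits = torch.randn(8, 10)
teacher_logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
print(distillation_loss(student_logits, teacher_logits, labels))
```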
Alternatives and similar repositories for KD_Lib
Users interested in KD_Lib are comparing it to the libraries listed below.
- A coding-free framework built on PyTorch for reproducible deep learning studies. PyTorch Ecosystem. 26 knowledge distillation methods p… (★1,543, updated last month)
- Revisiting Knowledge Distillation via Label Smoothing Regularization (CVPR 2020 Oral). (★584, updated 2 years ago)
- PyTorch library to facilitate development and standardized evaluation of neural network pruning methods. (★431, updated 2 years ago)
- Escaping the Big Data Paradigm with Compact Transformers, 2021 (train your Vision Transformers in 30 minutes on CIFAR-10 with a single GPU!). (★536, updated 9 months ago)
- A large-scale study of Knowledge Distillation. (★220, updated 5 years ago)
- Toolbox to extend PyTorch functionalities. (★420, updated last year)
- Code for Noisy Student Training (https://arxiv.org/abs/1911.04252). (★764, updated 4 years ago)
- NFNets and Adaptive Gradient Clipping for SGD implemented in PyTorch. Explanation at tourdeml.github.io/blog/. (★348, updated last year)
- PyTorch implementation of various Knowledge Distillation (KD) methods. (★1,713, updated 3 years ago)
- Estimate/count FLOPs for a given neural network using PyTorch. (★306, updated 3 years ago)
- (no description) (★605, updated this week)
- Compare neural networks by their feature similarity. (★370, updated 2 years ago)
- On-the-fly structured pruning for PyTorch models. This library implements several attribution metrics and structured pruning utils for n… (★167, updated 5 years ago)
- Open-source code for the paper "Dataset Distillation". (★811, updated 2 months ago)
- Official PyTorch implementation of "A Comprehensive Overhaul of Feature Distillation" (ICCV 2019). (★418, updated 5 years ago)
- Official PyTorch implementation of the "ImageNet-21K Pretraining for the Masses" (NeurIPS 2021) paper. (★772, updated 2 years ago)
- Is the attention layer even necessary? (https://arxiv.org/abs/2105.02723) (★486, updated 4 years ago)
- AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty. (★990, updated 2 months ago)
- Knowledge distillation papers. (★758, updated 2 years ago)
- [ICLR 2020] Once for All: Train One Network and Specialize It for Efficient Deployment. (★1,925, updated last year)
- Official PyTorch implementation of "TResNet: High-Performance GPU-Dedicated Architecture" (WACV 2021). (★475, updated 8 months ago)
- MEAL V2: Boosting Vanilla ResNet-50 to 80%+ Top-1 Accuracy on ImageNet without Tricks (NeurIPS 2020 Workshop). (★696, updated 3 years ago)
- A PyTorch implementation for exploring deep and shallow knowledge distillation (KD) experiments with flexibility. (★1,961, updated 2 years ago)
- (ICLR 2022 Spotlight) Official PyTorch implementation of "How Do Vision Transformers Work?" (★819, updated 3 years ago)
- Code for "Neural Architecture Search without Training" (ICML 2021). (★472, updated 4 years ago)
- A general and accurate MACs/FLOPs profiler for PyTorch models. (★626, updated 3 weeks ago)
- Learning rate warmup in PyTorch. (★411, updated 2 months ago)
- Implementation of ConvMixer for "Patches Are All You Need? 🤷". (★1,077, updated 2 years ago)
- Awesome machine learning model compression research papers, quantization, tools, and learning material. (★531, updated 11 months ago)
- Unofficial PyTorch reimplementation of RandAugment. (★636, updated 2 years ago)