yoshitomo-matsubara / torchdistill
A coding-free framework built on PyTorch for reproducible deep learning studies. Part of the PyTorch Ecosystem. 26 knowledge distillation methods presented at CVPR, ICLR, ECCV, NeurIPS, ICCV, etc. are implemented so far. Trained models, training logs, and configurations are available to ensure reproducibility and benchmarking.
★ 1,562 · Updated this week
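Since torchdistill and most of the repositories below center on knowledge distillation, a minimal sketch of the classic distillation loss (Hinton et al., 2015) may help orient readers. This is a generic illustration in plain Python, not torchdistill's API; the function names are hypothetical:

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over a list of logits."""
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(student_logits, teacher_logits, T=4.0):
    """Classic distillation term: KL(teacher_T || student_T),
    scaled by T^2 so gradient magnitudes stay comparable
    across temperatures (Hinton et al., 2015)."""
    p = softmax(teacher_logits, T)  # soft teacher targets
    q = softmax(student_logits, T)  # student predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return (T ** 2) * kl
```

In practice this term is combined with the usual cross-entropy on the hard labels, weighted by a hyperparameter; frameworks like torchdistill let such loss compositions be declared in configuration files rather than code.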
Alternatives and similar repositories for torchdistill
Users interested in torchdistill are comparing it to the libraries listed below.
- Pytorch implementation of various Knowledge Distillation (KD) methods. ★ 1,721 · Updated 3 years ago
- A Pytorch Knowledge Distillation library for benchmarking and extending works in the domains of Knowledge Distillation, Pruning, and Quan… ★ 646 · Updated 2 years ago
- [ICLR 2020] Contrastive Representation Distillation (CRD), and benchmark of recent knowledge distillation methods ★ 2,382 · Updated 2 years ago
- SAM: Sharpness-Aware Minimization (PyTorch) ★ 1,923 · Updated last year
- This is a collection of our NAS and Vision Transformer work. ★ 1,805 · Updated last year
- A PyTorch implementation for exploring deep and shallow knowledge distillation (KD) experiments with flexibility ★ 1,966 · Updated 2 years ago
- Awesome Knowledge-Distillation. Knowledge distillation papers (2014–2021), organized by category. ★ 2,631 · Updated 2 years ago
- EfficientFormerV2 [ICCV 2023] & EfficientFormer [NeurIPS 2022] ★ 1,081 · Updated 2 years ago
- Collection of common code that's shared among different research projects in FAIR computer vision team. ★ 2,189 · Updated last month
- [CVPR 2023] DepGraph: Towards Any Structural Pruning; LLMs, Vision Foundation Models, etc. ★ 3,151 · Updated last month
- The official implementation of [CVPR 2022] Decoupled Knowledge Distillation https://arxiv.org/abs/2203.08679 and [ICCV 2023] DOT: A Distill… ★ 873 · Updated last year
- PoolFormer: MetaFormer Is Actually What You Need for Vision (CVPR 2022 Oral) ★ 1,355 · Updated last year
- Escaping the Big Data Paradigm with Compact Transformers, 2021 (Train your Vision Transformers in 30 mins on CIFAR-10 with a single GPU!) ★ 536 · Updated 11 months ago
- Efficient computing methods developed by Huawei Noah's Ark Lab ★ 1,296 · Updated 11 months ago
- A curated list of neural network pruning resources. ★ 2,477 · Updated last year
- ★ 605 · Updated last month
- (ICLR 2022 Spotlight) Official PyTorch implementation of "How Do Vision Transformers Work?" ★ 819 · Updated 3 years ago
- OpenMMLab Model Compression Toolbox and Benchmark. ★ 1,632 · Updated last year
- Implementation of ConvMixer for "Patches Are All You Need? 🤷" ★ 1,077 · Updated 2 years ago
- Official Pytorch Implementation of: "ImageNet-21K Pretraining for the Masses" (NeurIPS, 2021) paper ★ 774 · Updated 2 years ago
- Explainability for Vision Transformers ★ 1,009 · Updated 3 years ago
- Official DeiT repository ★ 4,271 · Updated last year
- CVNets: A library for training computer vision networks ★ 1,916 · Updated last year
- Codebase for Image Classification Research, written in PyTorch. ★ 2,165 · Updated last year
- Pytorch reimplementation of the Vision Transformer (An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale) ★ 2,084 · Updated 3 years ago
- knowledge distillation papers ★ 762 · Updated 2 years ago
- CAIRI Supervised, Semi- and Self-Supervised Visual Representation Learning Toolbox and Benchmark ★ 657 · Updated last month
- PyTorch implementation of MoCo v3 https://arxiv.org/abs/2104.02057 ★ 1,298 · Updated 3 years ago
- Adan: Adaptive Nesterov Momentum Algorithm for Faster Optimizing Deep Models ★ 799 · Updated 4 months ago
- [ICLR 2020] Once for All: Train One Network and Specialize it for Efficient Deployment ★ 1,930 · Updated last year