yoshitomo-matsubara / torchdistill
A coding-free framework built on PyTorch for reproducible deep learning studies. Part of the PyTorch Ecosystem. 26 knowledge distillation methods presented at CVPR, ICLR, ECCV, NeurIPS, ICCV, etc. are implemented so far. Trained models, training logs, and configurations are available to ensure reproducibility and benchmarking.
★1,568 · Updated last week
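Many of the repositories below build on the classic logit-matching distillation objective (Hinton et al., 2015): a weighted sum of the hard-label cross-entropy and a temperature-softened KL divergence between teacher and student logits. A minimal NumPy sketch of that loss, for illustration only (the function and parameter names here are not from torchdistill's API):

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax (numerically stabilized)."""
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def kd_loss(student_logits, teacher_logits, label, T=4.0, alpha=0.5):
    """Classic knowledge distillation loss for a single example.

    alpha weights the hard-label cross-entropy term; the soft term is
    scaled by T**2 so gradients keep a comparable magnitude across
    temperatures, as in the original paper.
    """
    # Hard-label cross-entropy against the ground-truth class index
    ce = -np.log(softmax(student_logits)[label])
    # KL divergence between temperature-softened distributions
    p_t = softmax(teacher_logits, T)
    log_p_s = np.log(softmax(student_logits, T))
    kl = np.sum(p_t * (np.log(p_t) - log_p_s))
    return alpha * ce + (1 - alpha) * (T ** 2) * kl
```

When the student's logits match the teacher's exactly, the KL term vanishes and only the weighted cross-entropy remains; in practice the listed frameworks implement batched, autograd-ready versions of this and many refinements of it (CRD, DKD, etc.).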
Alternatives and similar repositories for torchdistill
Users interested in torchdistill compare it to the libraries listed below.
- A PyTorch knowledge distillation library for benchmarking and extending works in the domains of Knowledge Distillation, Pruning, and Quan… ★647 · Updated 2 years ago
- PyTorch implementation of various Knowledge Distillation (KD) methods. ★1,727 · Updated 3 years ago
- This is a collection of our NAS and Vision Transformer work. ★1,807 · Updated last year
- SAM: Sharpness-Aware Minimization (PyTorch) ★1,934 · Updated last year
- A PyTorch implementation for exploring deep and shallow knowledge distillation (KD) experiments with flexibility ★1,970 · Updated 2 years ago
- EfficientFormerV2 [ICCV 2023] & EfficientFormer [NeurIPS 2022] ★1,086 · Updated 2 years ago
- [ICLR 2020] Contrastive Representation Distillation (CRD), and benchmark of recent knowledge distillation methods ★2,392 · Updated 2 years ago
- PoolFormer: MetaFormer Is Actually What You Need for Vision (CVPR 2022 Oral) ★1,354 · Updated last year
- Awesome Knowledge-Distillation. Categorized knowledge distillation papers (2014–2021). ★2,635 · Updated 2 years ago
- The official implementation of [CVPR 2022] Decoupled Knowledge Distillation (https://arxiv.org/abs/2203.08679) and [ICCV 2023] DOT: A Distill… ★876 · Updated 2 years ago
- Collection of common code shared among different research projects in the FAIR computer vision team. ★2,194 · Updated 2 months ago
- [ICLR 2020] Once for All: Train One Network and Specialize It for Efficient Deployment ★1,937 · Updated last year
- A curated list of neural network pruning resources. ★2,481 · Updated last year
- (ICLR 2022 Spotlight) Official PyTorch implementation of "How Do Vision Transformers Work?" ★818 · Updated 3 years ago
- Escaping the Big Data Paradigm with Compact Transformers, 2021 (train your Vision Transformers in 30 minutes on CIFAR-10 with a single GPU!) ★536 · Updated last year
- OpenMMLab Model Compression Toolbox and Benchmark. ★1,640 · Updated last year
- Implementation of ConvMixer for "Patches Are All You Need? 🤷" ★1,077 · Updated 3 years ago
- An All-MLP solution for Vision, from Google AI ★1,050 · Updated 4 months ago
- Efficient computing methods developed by Huawei Noah's Ark Lab ★1,299 · Updated last year
- CVNets: A library for training computer vision networks ★1,926 · Updated 2 years ago
- Official DeiT repository ★4,277 · Updated last year
- ★607 · Updated 2 months ago
- Explainability for Vision Transformers ★1,014 · Updated 3 years ago
- PyTorch implementation of MoCo v3 (https://arxiv.org/abs/2104.02057) ★1,306 · Updated 3 years ago
- Official PyTorch implementation of the "ImageNet-21K Pretraining for the Masses" (NeurIPS 2021) paper ★776 · Updated 2 years ago
- [CVPR 2023] DepGraph: Towards Any Structural Pruning; LLMs, Vision Foundation Models, etc. ★3,175 · Updated 2 months ago
- Knowledge Distillation: CVPR 2020 Oral, Revisiting Knowledge Distillation via Label Smoothing Regularization ★584 · Updated 2 years ago
- PyTorch reimplementation of the Vision Transformer ("An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale") ★2,091 · Updated 3 years ago
- solo-learn: a library of self-supervised methods for visual representation learning powered by PyTorch Lightning ★1,526 · Updated 3 weeks ago
- [ICCV 2021] Tokens-to-Token ViT: Training Vision Transformers from Scratch on ImageNet ★1,188 · Updated 2 years ago