yoshitomo-matsubara / torchdistill
A coding-free framework built on PyTorch for reproducible deep learning studies; part of the PyTorch Ecosystem. 26 knowledge distillation methods presented at CVPR, ICLR, ECCV, NeurIPS, ICCV, etc. are implemented so far. Trained models, training logs, and configurations are available to ensure reproducibility and benchmarking.
☆1,540 · Updated 2 weeks ago
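For context, the methods that torchdistill and most of the repositories below implement build on the vanilla knowledge distillation objective of Hinton et al. (2015): a student is trained against both the ground-truth labels and the teacher's temperature-softened predictions. The sketch below shows that loss in plain PyTorch; the function name and the `temperature`/`alpha` hyperparameters are illustrative choices, not torchdistill's own API or configuration format.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets, temperature=4.0, alpha=0.5):
    """Vanilla KD: blend a soft-target KL term with standard cross-entropy."""
    # Softened teacher/student distributions; the KL term is scaled by T^2 so its
    # gradient magnitude stays comparable to the hard-label term (Hinton et al., 2015).
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * (temperature ** 2)
    hard_loss = F.cross_entropy(student_logits, targets)
    return alpha * soft_loss + (1.0 - alpha) * hard_loss
```

In a typical training loop the teacher runs in eval mode under `torch.no_grad()` and only the student's parameters are updated.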
Alternatives and similar repositories for torchdistill
Users interested in torchdistill are comparing it to the libraries listed below.
- PyTorch implementation of various Knowledge Distillation (KD) methods. ☆1,711 · Updated 3 years ago
- A PyTorch knowledge distillation library for benchmarking and extending works in the domains of Knowledge Distillation, Pruning, and Quantization. ☆641 · Updated 2 years ago
- A PyTorch implementation for exploring deep and shallow knowledge distillation (KD) experiments with flexibility ☆1,961 · Updated 2 years ago
- [ICLR 2020] Contrastive Representation Distillation (CRD), and benchmark of recent knowledge distillation methods ☆2,362 · Updated last year
- SAM: Sharpness-Aware Minimization (PyTorch) ☆1,910 · Updated last year
- Awesome Knowledge-Distillation. Knowledge distillation papers (2014-2021), organized by category. ☆2,622 · Updated 2 years ago
- This is a collection of our NAS and Vision Transformer work. ☆1,791 · Updated last year
- The official implementation of [CVPR 2022] Decoupled Knowledge Distillation (https://arxiv.org/abs/2203.08679) and [ICCV 2023] DOT: A Distillation-Oriented Trainer ☆867 · Updated last year
- [CVPR 2023] DepGraph: Towards Any Structural Pruning; LLMs, Vision Foundation Models, etc. ☆3,108 · Updated last month
- Collection of common code that's shared among different research projects in FAIR computer vision team. ☆2,159 · Updated 3 weeks ago
- OpenMMLab Model Compression Toolbox and Benchmark. ☆1,618 · Updated last year
- A curated list of neural network pruning resources. ☆2,468 · Updated last year
- EfficientFormerV2 [ICCV 2023] & EfficientFormer [NeurIPS 2022] ☆1,064 · Updated 2 years ago
- Efficient computing methods developed by Huawei Noah's Ark Lab ☆1,290 · Updated 9 months ago
- This is a collection of our zero-cost NAS and efficient vision applications. ☆427 · Updated last year
- Escaping the Big Data Paradigm with Compact Transformers, 2021 (Train your Vision Transformers in 30 mins on CIFAR-10 with a single GPU!) ☆536 · Updated 9 months ago
- Implementation of ConvMixer for "Patches Are All You Need?" ☆1,076 · Updated 2 years ago
- [ICLR 2020] Once for All: Train One Network and Specialize it for Efficient Deployment ☆1,926 · Updated last year
- solo-learn: a library of self-supervised methods for visual representation learning powered by PyTorch Lightning ☆1,506 · Updated this week
- PoolFormer: MetaFormer Is Actually What You Need for Vision (CVPR 2022 Oral) ☆1,347 · Updated last year
- ☆605 · Updated last month
- Knowledge distillation papers ☆758 · Updated 2 years ago
- VISSL is FAIR's library of extensible, modular and scalable components for SOTA Self-Supervised Learning with images. ☆3,286 · Updated last year
- An All-MLP solution for Vision, from Google AI ☆1,036 · Updated last month
- Official PyTorch implementation of the "ImageNet-21K Pretraining for the Masses" (NeurIPS 2021) paper ☆769 · Updated 2 years ago
- Codebase for Image Classification Research, written in PyTorch. ☆2,160 · Updated last year
- Knowledge Distillation: CVPR 2020 Oral, Revisiting Knowledge Distillation via Label Smoothing Regularization ☆583 · Updated 2 years ago
- PyTorch implementation of MoCo v3 (https://arxiv.org/abs/2104.02057) ☆1,281 · Updated 3 years ago
- Official DeiT repository ☆4,246 · Updated last year
- CVNets: A library for training computer vision networks ☆1,909 · Updated last year