yzd-v / cls_KD
'NKD and USKD' (ICCV 2023) and 'ViTKD' (CVPRW 2024)
☆243 · Oct 10, 2023 · Updated 2 years ago
Alternatives and similar repositories for cls_KD
Users interested in cls_KD are comparing it to the repositories listed below.
- Masked Generative Distillation (ECCV 2022) ☆240 · Nov 9, 2022 · Updated 3 years ago
- Official implementation of the paper "Knowledge Distillation from A Stronger Teacher", NeurIPS 2022 ☆155 · Dec 28, 2022 · Updated 3 years ago
- PyTorch code and checkpoints release for VanillaKD: https://arxiv.org/abs/2305.15781 ☆76 · Nov 21, 2023 · Updated 2 years ago
- [AAAI 2023] Official PyTorch code for "Curriculum Temperature for Knowledge Distillation" ☆182 · Dec 3, 2024 · Updated last year
- TF-FD ☆20 · Nov 19, 2022 · Updated 3 years ago
- Code for 'Multi-level Logit Distillation' (CVPR 2023) ☆71 · Sep 23, 2024 · Updated last year
- The official project website of "NORM: Knowledge Distillation via N-to-One Representation Matching" (The paper of NORM is published in IC… ☆20 · Sep 18, 2023 · Updated 2 years ago
- [ICML 2024] DetKDS: Knowledge Distillation Search for Object Detectors ☆19 · Jul 11, 2024 · Updated last year
- Official implementation of the paper "Masked Distillation with Receptive Tokens", ICLR 2023 ☆71 · Apr 14, 2023 · Updated 2 years ago
- The official implementation of the paper: Improving Knowledge Distillation via Regularizing Feature Norm and Direction ☆24 · Aug 3, 2023 · Updated 2 years ago
- CVPR 2023, Class Attention Transfer Based Knowledge Distillation ☆46 · Jun 13, 2023 · Updated 2 years ago
- [NeurIPS'22] Projector Ensemble Feature Distillation ☆30 · Jan 4, 2024 · Updated 2 years ago
- ☆267 · Nov 30, 2022 · Updated 3 years ago
- Localization Distillation for Object Detection (CVPR 2022, TPAMI 2023) ☆388 · Oct 24, 2024 · Updated last year
- The official MegEngine implementation of the ECCV 2022 paper: Efficient One Pass Self-distillation with Zipf's Label Smoothin… ☆28 · Oct 19, 2022 · Updated 3 years ago
- [ECCV 2022] Official implementation of MixSKD: Self-Knowledge Distillation from Mixup for Image Recognition && PyTorch implementations of… ☆110 · Nov 28, 2022 · Updated 3 years ago
- Learning Efficient Vision Transformers via Fine-Grained Manifold Distillation, NeurIPS 2022 ☆34 · Oct 18, 2022 · Updated 3 years ago
- [ICCV 2023] EMQ: Evolving Training-free Proxies for Automated Mixed Precision Quantization ☆28 · Dec 6, 2023 · Updated 2 years ago
- [CVPR 2024 Highlight] Logit Standardization in Knowledge Distillation ☆391 · Oct 9, 2024 · Updated last year
- [CVPR 2023] The official implementation of our paper "Masked Autoencoders Enable Efficient Knowledge Distillers" ☆109 · Jul 24, 2023 · Updated 2 years ago
- ☆88 · Aug 31, 2023 · Updated 2 years ago
- [ECCV 2024] The official implementation of "Stitched ViTs are Flexible Vision Backbones" ☆29 · Jan 23, 2024 · Updated 2 years ago
- Official implementations of CIRKD: Cross-Image Relational Knowledge Distillation for Semantic Segmentation and implementations on Citysca… ☆211 · Aug 29, 2025 · Updated 5 months ago
- CrossKD: Cross-Head Knowledge Distillation for Dense Object Detection ☆196 · Sep 24, 2023 · Updated 2 years ago
- Training ImageNet / CIFAR models with SOTA strategies and fancy techniques such as ViT, KD, Rep, etc. ☆87 · Mar 20, 2024 · Updated last year
- Official code for our ECCV'22 paper "A Fast Knowledge Distillation Framework for Visual Recognition" ☆190 · Apr 29, 2024 · Updated last year
- Regularizing Class-wise Predictions via Self-knowledge Distillation (CVPR 2020) ☆109 · Jun 18, 2020 · Updated 5 years ago
- Official implementation of Bridging Cross-task Protocol Inconsistency for Distillation in Dense Object Detection ☆50 · Oct 7, 2023 · Updated 2 years ago
- [NeurIPS 2024] Official code release for our paper "Revisiting the Integration of Convolution and Attention for Vision Backbone" ☆43 · Jan 21, 2025 · Updated last year
- Official PyTorch implementation of "Meta-prediction Model for Distillation-Aware NAS on Unseen Datasets" (ICLR 2023, notable top 25%) ☆26 · Mar 18, 2024 · Updated last year
- Official implementation of the paper "Relational Surrogate Loss Learning", ICLR 2022 ☆37 · Nov 25, 2022 · Updated 3 years ago
- [ICLR 2023] "More ConvNets in the 2020s: Scaling up Kernels Beyond 51x51 using Sparsity"; [ICML 2023] "Are Large Kernels Better Teachers… ☆284 · Jul 5, 2023 · Updated 2 years ago
- [CVPR 2023] Official repository for the paper "Stare at What You See: Masked Image Modeling without Reconstruction" ☆70 · Jul 2, 2025 · Updated 7 months ago
- [TPAMI 2023] Official implementations of L-MCL: Online Knowledge Distillation via Mutual Contrastive Learning for Visual Recognition ☆26 · Jul 14, 2023 · Updated 2 years ago
- A collection of our NAS and Vision Transformer work. ☆1,826 · Jul 25, 2024 · Updated last year
- [ECCV 2022] AMixer: Adaptive Weight Mixing for Self-attention Free Vision Transformers ☆29 · Nov 14, 2022 · Updated 3 years ago
- [ECCV 2020] Knowledge Distillation Meets Self-Supervision ☆238 · Dec 15, 2022 · Updated 3 years ago
- Switchable Online Knowledge Distillation ☆19 · Oct 27, 2024 · Updated last year
- Distilling Knowledge via Knowledge Review, CVPR 2021 ☆280 · Dec 16, 2022 · Updated 3 years ago