lliai / DisWOT-CVPR2023
☆28 · Updated 2 years ago
Alternatives and similar repositories for DisWOT-CVPR2023
Users interested in DisWOT-CVPR2023 are comparing it to the repositories listed below.
- Code for 'Multi-level Logit Distillation' (CVPR2023) ☆70 · Updated last year
- The official project website of "NORM: Knowledge Distillation via N-to-One Representation Matching" (The paper of NORM is published in IC… ☆20 · Updated 2 years ago
- ☆28 · Updated 3 years ago
- Training ImageNet / CIFAR models with SOTA strategies and fancy techniques such as ViT, KD, Rep, etc. ☆87 · Updated last year
- [NeurIPS 2024] Search for Efficient LLMs ☆15 · Updated 11 months ago
- ☆48 · Updated 2 years ago
- [ICML2024] DetKDS: Knowledge Distillation Search for Object Detectors ☆17 · Updated last year
- Official implementation for "Knowledge Distillation with Refined Logits". ☆21 · Updated last year
- Switchable Online Knowledge Distillation ☆19 · Updated last year
- Learning Efficient Vision Transformers via Fine-Grained Manifold Distillation. NeurIPS 2022. ☆33 · Updated 3 years ago
- [ICLR'23] Trainability Preserving Neural Pruning (PyTorch) ☆34 · Updated 2 years ago
- Official implementation of paper "Knowledge Distillation from A Stronger Teacher", NeurIPS 2022 ☆154 · Updated 3 years ago
- PyTorch code and checkpoints release for VanillaKD: https://arxiv.org/abs/2305.15781 ☆76 · Updated 2 years ago
- [ECCV-2022] Official implementation of MixSKD: Self-Knowledge Distillation from Mixup for Image Recognition && Pytorch Implementations of… ☆110 · Updated 3 years ago
- [Preprint] Why is the State of Neural Network Pruning so Confusing? On the Fairness, Comparison Setup, and Trainability in Network Prunin… ☆41 · Updated 3 months ago
- Official implementation of Evo-ViT: Slow-Fast Token Evolution for Dynamic Vision Transformer ☆74 · Updated 3 years ago
- TF-FD ☆20 · Updated 3 years ago
- ☆23 · Updated last year
- ☆13 · Updated 2 years ago
- Official implementation of the paper "Function-Consistent Feature Distillation" (ICLR 2023) ☆30 · Updated 2 years ago
- Awesome Knowledge-Distillation for CV ☆91 · Updated last year
- CVPR 2023, Class Attention Transfer Based Knowledge Distillation ☆46 · Updated 2 years ago
- Auto-Prox-AAAI24 ☆14 · Updated last year
- [CVPR-2022] Official implementation for "Knowledge Distillation with the Reused Teacher Classifier". ☆100 · Updated 3 years ago
- Code for Paper "Self-Distillation from the Last Mini-Batch for Consistency Regularization" ☆43 · Updated 3 years ago
- [ICCV 2023 oral] This is the official repository for our paper: "Sensitivity-Aware Visual Parameter-Efficient Fine-Tuning". ☆75 · Updated 2 years ago
- BESA is a differentiable weight pruning technique for large language models. ☆17 · Updated last year
- Official PyTorch implementation for CVPR2022 paper "Bootstrapping ViTs: Towards Liberating Vision Transformers from Pre-training" ☆19 · Updated 3 years ago
- Official implementation of paper "Masked Distillation with Receptive Tokens", ICLR 2023. ☆71 · Updated 2 years ago
- Official PyTorch implementation of PS-KD ☆89 · Updated 3 years ago