lliai / DisWOT-CVPR2023
☆26 · Updated last year
Alternatives and similar repositories for DisWOT-CVPR2023
Users interested in DisWOT-CVPR2023 are comparing it to the repositories listed below
- PyTorch implementation of our paper accepted by IEEE TNNLS, 2022 — Carrying out CNN Channel Pruning in a White Box ☆18 · Updated 3 years ago
- [NeurIPS 2024] Search for Efficient LLMs ☆14 · Updated 4 months ago
- Code for 'Multi-level Logit Distillation' (CVPR 2023) ☆64 · Updated 8 months ago
- [ICLR'23] Trainability Preserving Neural Pruning (PyTorch) ☆32 · Updated 2 years ago
- ☆12 · Updated last year
- The codebase for the paper "PPT: Token Pruning and Pooling for Efficient Vision Transformer" ☆23 · Updated 6 months ago
- PyTorch code and checkpoints release for VanillaKD: https://arxiv.org/abs/2305.15781 ☆75 · Updated last year
- ☆45 · Updated last year
- Learning Efficient Vision Transformers via Fine-Grained Manifold Distillation. NeurIPS 2022. ☆32 · Updated 2 years ago
- ☆27 · Updated 2 years ago
- [ICCV 23] An approach to enhance the efficiency of Vision Transformer (ViT) by concurrently employing token pruning and token merging techniques ☆96 · Updated last year
- PyTorch implementation of our paper accepted by CVPR 2022 -- IntraQ: Learning Synthetic Images with Intra-Class Heterogeneity for Zero-Shot Network Quantization ☆32 · Updated 3 years ago
- To appear in the 11th International Conference on Learning Representations (ICLR 2023). ☆17 · Updated 2 years ago
- Training ImageNet / CIFAR models with SOTA strategies and fancy techniques such as ViT, KD, Rep, etc. ☆82 · Updated last year
- [CVPR-2022] Official implementation for "Knowledge Distillation with the Reused Teacher Classifier". ☆94 · Updated 2 years ago
- Official implementation for "Knowledge Distillation with Refined Logits". ☆14 · Updated 9 months ago
- TF-FD ☆20 · Updated 2 years ago
- Implementation of PGONAS for CVPR22W and RD-NAS for ICASSP23 ☆22 · Updated 2 years ago
- The official project website of "NORM: Knowledge Distillation via N-to-One Representation Matching" (published in ICLR 2023) ☆20 · Updated last year
- BESA is a differentiable weight pruning technique for large language models. ☆16 · Updated last year
- [Preprint] Why is the State of Neural Network Pruning so Confusing? On the Fairness, Comparison Setup, and Trainability in Network Pruning ☆40 · Updated 2 years ago
- CVPR 2023, Class Attention Transfer Based Knowledge Distillation ☆44 · Updated last year
- Fire Together Wire Together: A Dynamic Pruning Approach with Self-Supervised Mask Prediction ☆11 · Updated 3 years ago
- ☆23 · Updated last year
- Code for the paper "Self-Distillation from the Last Mini-Batch for Consistency Regularization" ☆40 · Updated 2 years ago
- Official implementation of the paper "Function-Consistent Feature Distillation" (ICLR 2023) ☆29 · Updated last year
- Source code of our TNNLS paper "Boosting Convolutional Neural Networks with Middle Spectrum Grouped Convolution" ☆12 · Updated 2 years ago
- Official implementation of the paper "Masked Distillation with Receptive Tokens", ICLR 2023. ☆68 · Updated 2 years ago
- Auto-Prox-AAAI24 ☆13 · Updated last year
- [ECCV-2022] Official implementation of MixSKD: Self-Knowledge Distillation from Mixup for Image Recognition && PyTorch implementations of… ☆106 · Updated 2 years ago