VainF / Isomorphic-Pruning
[ECCV 2024] Isomorphic Pruning for Vision Models
☆77 · Updated last year
Alternatives and similar repositories for Isomorphic-Pruning
Users interested in Isomorphic-Pruning are comparing it to the libraries listed below.
- PyTorch code and checkpoints release for VanillaKD: https://arxiv.org/abs/2305.15781 ☆75 · Updated last year
- [ICCV 23] An approach to enhance the efficiency of Vision Transformer (ViT) by concurrently employing token pruning and token merging tech… ☆100 · Updated 2 years ago
- [CVPR 2024] PTQ4SAM: Post-Training Quantization for Segment Anything ☆79 · Updated last year
- ☆12 · Updated last year
- The official implementation of the NeurIPS 2022 paper Q-ViT. ☆96 · Updated 2 years ago
- Training ImageNet / CIFAR models with SOTA strategies and fancy techniques such as ViT, KD, Rep, etc. ☆84 · Updated last year
- [CVPR 2023 Highlight] This is the official implementation of "Stitchable Neural Networks". ☆247 · Updated 2 years ago
- [NeurIPS 2023] Structural Pruning for Diffusion Models ☆201 · Updated last year
- [ICLR'23] Trainability Preserving Neural Pruning (PyTorch) ☆34 · Updated 2 years ago
- [CVPR'23] SparseViT: Revisiting Activation Sparsity for Efficient High-Resolution Vision Transformer ☆74 · Updated last year
- ☆47 · Updated 2 years ago
- ☆23 · Updated 10 months ago
- (ICLR 2025) BinaryDM: Accurate Weight Binarization for Efficient Diffusion Models ☆24 · Updated 11 months ago
- PyTorch implementation of our paper accepted by CVPR 2022 -- IntraQ: Learning Synthetic Images with Intra-Class Heterogeneity for Zero-Sh… ☆33 · Updated 3 years ago
- 1.5−3.0× lossless training or pre-training speedup. An off-the-shelf, easy-to-implement algorithm for the efficient training of foundatio… ☆222 · Updated last year
- Learnable Semi-structured Sparsity for Vision Transformers and Diffusion Transformers ☆14 · Updated 7 months ago
- Recent Advances on Efficient Vision Transformers ☆53 · Updated 2 years ago
- Super-resolution; post-training quantization; model compression ☆12 · Updated last year
- [TMLR] Official PyTorch implementation of paper "Quantization Variation: A New Perspective on Training Transformers with Low-Bit Precisio… ☆46 · Updated 11 months ago
- The official implementation for the paper: Improving Knowledge Distillation via Regularizing Feature Norm and Direction ☆22 · Updated 2 years ago
- [NeurIPS 2023] MCUFormer: Deploying Vision Transformers on Microcontrollers with Limited Memory ☆71 · Updated last year
- Join the High Accuracy Club on ImageNet with A Binary Neural Network Ticket ☆70 · Updated 2 years ago
- PyTorch code and checkpoints release for OFA-KD: https://arxiv.org/abs/2310.19444 ☆132 · Updated last year
- [ICCV 2023] Efficient Joint Optimization of Layer-Adaptive Weight Pruning in Deep Neural Networks ☆25 · Updated last year
- [AAAI 2024] Understanding the Role of the Projector in Knowledge Distillation ☆18 · Updated last year
- ☆22 · Updated last year
- [ICML 2024] Official PyTorch implementation of "SLAB: Efficient Transformers with Simplified Linear Attention and Progressive Re-paramete… ☆106 · Updated last year
- ☆13 · Updated last year
- [CVPR 2023] Castling-ViT: Compressing Self-Attention via Switching Towards Linear-Angular Attention During Vision Transformer Inference ☆30 · Updated last year
- [ICCV 2025] QuEST: Efficient Finetuning for Low-bit Diffusion Models ☆53 · Updated 2 months ago