VainF / Isomorphic-Pruning
[ECCV 2024] Isomorphic Pruning for Vision Models
☆79 · Updated last year
Alternatives and similar repositories for Isomorphic-Pruning
Users interested in Isomorphic-Pruning are comparing it to the repositories listed below:
- [ICCV 23] An approach to enhance the efficiency of Vision Transformer (ViT) by concurrently employing token pruning and token merging tech… ☆101 · Updated 2 years ago
- PyTorch code and checkpoints release for VanillaKD: https://arxiv.org/abs/2305.15781 ☆76 · Updated last year
- [CVPR 2024] PTQ4SAM: Post-Training Quantization for Segment Anything ☆81 · Updated last year
- ☆13 · Updated 2 years ago
- [NeurIPS 2023] Structural Pruning for Diffusion Models ☆203 · Updated last year
- [CVPR 2023 Highlight] Official implementation of "Stitchable Neural Networks". ☆249 · Updated 2 years ago
- Official implementation of the NeurIPS 2022 paper Q-ViT. ☆98 · Updated 2 years ago
- ☆47 · Updated 2 years ago
- Training ImageNet / CIFAR models with SOTA strategies and fancy techniques such as ViT, KD, Rep, etc. ☆85 · Updated last year
- ☆13 · Updated last year
- [TMLR] Official PyTorch implementation of the paper "Quantization Variation: A New Perspective on Training Transformers with Low-Bit Precisio… ☆46 · Updated last year
- 1.5–3.0× lossless training or pre-training speedup. An off-the-shelf, easy-to-implement algorithm for the efficient training of foundatio… ☆225 · Updated last year
- [CVPR'23] SparseViT: Revisiting Activation Sparsity for Efficient High-Resolution Vision Transformer ☆74 · Updated last year
- Implementation of Post-training Quantization on Diffusion Models (CVPR 2023) ☆139 · Updated 2 years ago
- [ICCV 2023] Efficient Joint Optimization of Layer-Adaptive Weight Pruning in Deep Neural Networks ☆25 · Updated last year
- Learnable Semi-structured Sparsity for Vision Transformers and Diffusion Transformers ☆14 · Updated 8 months ago
- [CVPR 2023] Castling-ViT: Compressing Self-Attention via Switching Towards Linear-Angular Attention During Vision Transformer Inference ☆30 · Updated last year
- Official implementation of PTQD: Accurate Post-Training Quantization for Diffusion Models ☆100 · Updated last year
- Official implementation of the paper "Knowledge Diffusion for Distillation", NeurIPS 2023 ☆90 · Updated last year
- Official implementation of the paper "Improving Knowledge Distillation via Regularizing Feature Norm and Direction" ☆22 · Updated 2 years ago
- PyTorch code and checkpoints release for OFA-KD: https://arxiv.org/abs/2310.19444 ☆132 · Updated last year
- ☆23 · Updated last year
- [CVPR 2023] PD-Quant: Post-Training Quantization Based on Prediction Difference Metric ☆59 · Updated 2 years ago
- [ICLR 2024 Spotlight] Official PyTorch implementation of "EfficientDM: Efficient Quantization-Aware Fine-Tuning of Low-Bit Di… ☆66 · Updated last year
- [NeurIPS 2023] MCUFormer: Deploying Vision Transformers on Microcontrollers with Limited Memory ☆73 · Updated last year
- PyTorch implementation of our CVPR 2022 paper "IntraQ: Learning Synthetic Images with Intra-Class Heterogeneity for Zero-Sh… ☆33 · Updated 3 years ago
- [ICLR'23] Trainability Preserving Neural Pruning (PyTorch) ☆33 · Updated 2 years ago
- Super-resolution; post-training quantization; model compression ☆13 · Updated last year
- [ICCV 2025] QuEST: Efficient Finetuning for Low-bit Diffusion Models ☆53 · Updated 4 months ago
- [ICLR 2025] BinaryDM: Accurate Weight Binarization for Efficient Diffusion Models ☆24 · Updated last year