HankYe / Once-for-Both
[CVPR'24] Once for Both: Single Stage of Importance and Sparsity Search for Vision Transformer Compression
☆14 · Updated last year
Alternatives and similar repositories for Once-for-Both
Users that are interested in Once-for-Both are comparing it to the libraries listed below
- [ICCV 2023] An approach to enhance the efficiency of Vision Transformer (ViT) by concurrently employing token pruning and token merging techniques ☆99 · Updated 2 years ago
- [ECCV 2024] Isomorphic Pruning for Vision Models ☆74 · Updated last year
- [ICML 2023] UPop: Unified and Progressive Pruning for Compressing Vision-Language Transformers. ☆105 · Updated 8 months ago
- [NeurIPS 2024] MoE Jetpack: From Dense Checkpoints to Adaptive Mixture of Experts for Vision Tasks ☆130 · Updated 9 months ago
- [NeurIPS'24] Efficient and accurate memory-saving method for W4A4 large multi-modal models. ☆79 · Updated 8 months ago
- [ICCV'25] The official code of the paper "Combining Similarity and Importance for Video Token Reduction on Large Visual Language Models" ☆56 · Updated last week
- Give us minutes, we give back a faster Mamba. The official implementation of "Faster Vision Mamba is Rebuilt in Minutes via Merged Token Re-training" ☆39 · Updated 8 months ago
- The official implementation of "2024NeurIPS Dynamic Tuning Towards Parameter and Inference Efficiency for ViT Adaptation"☆47Updated 8 months ago
- [ICLR 2025] Mixture Compressor for Mixture-of-Experts LLMs Gains More ☆50 · Updated 6 months ago
- [ICML'25] Official implementation of the paper "SparseVLM: Visual Token Sparsification for Efficient Vision-Language Model Inference". ☆151 · Updated 3 months ago
- [CVPR 2024 Highlight & TPAMI 2025] This is the official PyTorch implementation of "TFMQ-DM: Temporal Feature Maintenance Quantization for Diffusion Models" ☆103 · Updated last month
- The code repository of "MBQ: Modality-Balanced Quantization for Large Vision-Language Models" ☆55 · Updated 5 months ago
- [CVPR 2023 Highlight] This is the official implementation of "Stitchable Neural Networks". ☆247 · Updated 2 years ago
- The official implementation of the NeurIPS 2022 paper Q-ViT. ☆96 · Updated 2 years ago
- [ICLR'25] ViDiT-Q: Efficient and Accurate Quantization of Diffusion Transformers for Image and Video Generation ☆116 · Updated 5 months ago
- DiTAS: Quantizing Diffusion Transformers via Enhanced Activation Smoothing (WACV 2025) ☆11 · Updated 9 months ago
- A list of papers, docs, and code about efficient AIGC. This repo aims to provide information for efficient AIGC research, including language… ☆188 · Updated 6 months ago
- [CVPR 2025] DyCoke: Dynamic Compression of Tokens for Fast Video Large Language Models ☆72 · Updated 2 months ago
- [CVPR 2024] PTQ4SAM: Post-Training Quantization for Segment Anything ☆79 · Updated last year
- [NeurIPS 2024 Oral🔥] DuQuant: Distributing Outliers via Dual Transformation Makes Stronger Quantized LLMs. ☆165 · Updated 11 months ago
- [ECCV 2024] AdaLog: Post-Training Quantization for Vision Transformers with Adaptive Logarithm Quantizer ☆30 · Updated 8 months ago
- [ICCV 2023] Dataset Quantization ☆260 · Updated last year
- PyTorch implementation of PTQ4DiT (https://arxiv.org/abs/2405.16005) ☆32 · Updated 9 months ago
- torch_quantizer is an out-of-the-box quantization tool for PyTorch models on the CUDA backend, specially optimized for diffusion models. ☆23 · Updated last year
- [ICCV 2025 Highlight] Rectifying Magnitude Neglect in Linear Attention ☆34 · Updated last month
- [AAAI 2025] Linear-complexity Visual Sequence Learning with Gated Linear Attention ☆111 · Updated last year
- [COLM 2025] DFRot: Achieving Outlier-Free and Massive Activation-Free for Rotated LLMs with Refined Rotation; Zhihu: https://zhuanlan.zhihu.c… ☆26 · Updated 6 months ago
- [CVPR 2025] PyramidDrop: Accelerating Your Large Vision-Language Models via Pyramid Visual Redundancy Reduction ☆120 · Updated 5 months ago
- A paper list on token merging, reduction, resampling, and dropping for MLLMs. ☆69 · Updated 7 months ago
- [ICML 2024] CrossGET: Cross-Guided Ensemble of Tokens for Accelerating Vision-Language Transformers. ☆34 · Updated 8 months ago