HankYe / Once-for-Both
[CVPR'24] Once for Both: Single Stage of Importance and Sparsity Search for Vision Transformer Compression
☆15 · Updated last year
Alternatives and similar repositories for Once-for-Both
Users interested in Once-for-Both are comparing it to the repositories listed below.
- [ICML 2023] UPop: Unified and Progressive Pruning for Compressing Vision-Language Transformers ☆105 · Updated 11 months ago
- [NeurIPS'24] Efficient and accurate memory-saving method towards W4A4 large multi-modal models. ☆91 · Updated 10 months ago
- [ICCV 23] An approach to enhance the efficiency of Vision Transformer (ViT) by concurrently employing token pruning and token merging tech… ☆101 · Updated 2 years ago
- [ICCV'25] The official code of the paper "Combining Similarity and Importance for Video Token Reduction on Large Visual Language Models" ☆68 · Updated this week
- [ICML'25] Official implementation of the paper "SparseVLM: Visual Token Sparsification for Efficient Vision-Language Model Inference". ☆193 · Updated 5 months ago
- The code repository of "MBQ: Modality-Balanced Quantization for Large Vision-Language Models" ☆66 · Updated 8 months ago
- [NeurIPS 2024] The official implementation of "Dynamic Tuning Towards Parameter and Inference Efficiency for ViT Adaptation" ☆51 · Updated 11 months ago
- [ECCV 2024] Isomorphic Pruning for Vision Models ☆78 · Updated last year
- [NeurIPS 24] MoE Jetpack: From Dense Checkpoints to Adaptive Mixture of Experts for Vision Tasks ☆133 · Updated last year
- [ICLR 2025] Mixture Compressor for Mixture-of-Experts LLMs Gains More ☆62 · Updated 9 months ago
- (CVPR 2025) PyramidDrop: Accelerating Your Large Vision-Language Models via Pyramid Visual Redundancy Reduction ☆134 · Updated 8 months ago
- [CVPR 2024] PTQ4SAM: Post-Training Quantization for Segment Anything ☆82 · Updated last year
- [NeurIPS 2024 Oral🔥] DuQuant: Distributing Outliers via Dual Transformation Makes Stronger Quantized LLMs. ☆176 · Updated last year
- The official implementation of the NeurIPS 2022 paper Q-ViT. ☆101 · Updated 2 years ago
- [CVPR 2024 Highlight & TPAMI 2025] The official PyTorch implementation of "TFMQ-DM: Temporal Feature Maintenance Quantization for… ☆109 · Updated 2 months ago
- torch_quantizer is an out-of-the-box quantization tool for PyTorch models on the CUDA backend, specially optimized for Diffusion Models. ☆22 · Updated last year
- [CVPR 2023 Highlight] The official implementation of "Stitchable Neural Networks". ☆249 · Updated 2 years ago
- Give us minutes, we give back a faster Mamba. The official implementation of "Faster Vision Mamba is Rebuilt in Minutes via Merged Token … ☆40 · Updated 11 months ago
- LLaVA-PruMerge: Adaptive Token Reduction for Efficient Large Multimodal Models ☆154 · Updated 2 months ago
- A paper list about Token Merge, Reduce, Resample, and Drop for MLLMs. ☆75 · Updated last month
- [CVPR 2025] DyCoke: Dynamic Compression of Tokens for Fast Video Large Language Models ☆90 · Updated last week
- [CVPR 2023] PD-Quant: Post-Training Quantization Based on Prediction Difference Metric ☆60 · Updated 2 years ago
- [ICML 2024] CrossGET: Cross-Guided Ensemble of Tokens for Accelerating Vision-Language Transformers ☆34 · Updated 11 months ago
- Official code for the paper "[CLS] Attention is All You Need for Training-Free Visual Token Pruning: Make VLM Inference Faster." ☆97 · Updated 5 months ago
- DiTAS: Quantizing Diffusion Transformers via Enhanced Activation Smoothing (WACV 2025) ☆12 · Updated last year
- MADTP: Multimodal Alignment-Guided Dynamic Token Pruning for Accelerating Vision-Language Transformer ☆49 · Updated last year
- A list of papers, docs, and code about efficient AIGC. This repo aims to provide information for efficient AIGC research, including languag… ☆201 · Updated 9 months ago
- [ECCV 2024 Oral] Code for the paper "An Image is Worth 1/2 Tokens After Layer 2: Plug-and-Play Inference Acceleration for Large Vision-Langua…" ☆519 · Updated 10 months ago
- [ICLR'25] ViDiT-Q: Efficient and Accurate Quantization of Diffusion Transformers for Image and Video Generation ☆138 · Updated 8 months ago
- [COLM 2025] DFRot: Achieving Outlier-Free and Massive Activation-Free for Rotated LLMs with Refined Rotation; Zhihu: https://zhuanlan.zhihu.c… ☆28 · Updated 8 months ago