HankYe / Once-for-Both
[CVPR'24] Once for Both: Single Stage of Importance and Sparsity Search for Vision Transformer Compression
☆15 · Updated last year
Alternatives and similar repositories for Once-for-Both
Users interested in Once-for-Both are comparing it to the repositories listed below
- [NeurIPS'24] Efficient and accurate memory saving method towards W4A4 large multi-modal models. ☆95 · Updated last year
- [NeurIPS 24] MoE Jetpack: From Dense Checkpoints to Adaptive Mixture of Experts for Vision Tasks ☆134 · Updated last year
- [ICCV 23] An approach to enhance the efficiency of Vision Transformer (ViT) by concurrently employing token pruning and token merging tech… ☆105 · Updated 2 years ago
- [ICCV'25] The official code of the paper "Combining Similarity and Importance for Video Token Reduction on Large Visual Language Models" ☆69 · Updated 3 weeks ago
- The official implementation of "2024NeurIPS Dynamic Tuning Towards Parameter and Inference Efficiency for ViT Adaptation"☆52Updated last year
- [ICML'25] Official implementation of paper "SparseVLM: Visual Token Sparsification for Efficient Vision-Language Model Inference" and "Sp… ☆237 · Updated last month
- [ICML 2023] UPop: Unified and Progressive Pruning for Compressing Vision-Language Transformers ☆105 · Updated last year
- The code repository of "MBQ: Modality-Balanced Quantization for Large Vision-Language Models" ☆75 · Updated 10 months ago
- [ECCV 2024] Isomorphic Pruning for Vision Models ☆81 · Updated last year
- Official code for paper: [CLS] Attention is All You Need for Training-Free Visual Token Pruning: Make VLM Inference Faster. ☆106 · Updated 7 months ago
- LLaVA-PruMerge: Adaptive Token Reduction for Efficient Large Multimodal Models ☆163 · Updated 4 months ago
- (CVPR 2025) PyramidDrop: Accelerating Your Large Vision-Language Models via Pyramid Visual Redundancy Reduction ☆141 · Updated 11 months ago
- Give us minutes, we give back a faster Mamba. The official implementation of "Faster Vision Mamba is Rebuilt in Minutes via Merged Token … ☆40 · Updated last year
- A paper list about Token Merge, Reduce, Resample, Drop for MLLMs. ☆84 · Updated 3 months ago
- [ICLR 2025] Mixture Compressor for Mixture-of-Experts LLMs Gains More ☆66 · Updated last year
- [CVPR 2024 Highlight & TPAMI 2025] This is the official PyTorch implementation of "TFMQ-DM: Temporal Feature Maintenance Quantization for… ☆108 · Updated 4 months ago
- [CVPR 2025] DivPrune: Diversity-based Visual Token Pruning for Large Multimodal Models ☆65 · Updated 2 months ago
- [NeurIPS 2025] Official code for paper: Beyond Attention or Similarity: Maximizing Conditional Diversity for Token Pruning in MLLMs. ☆86 · Updated 4 months ago
- [CVPR 2025] DyCoke: Dynamic Compression of Tokens for Fast Video Large Language Models ☆99 · Updated 2 months ago
- [ECCV 2024 Oral] Code for paper: An Image is Worth 1/2 Tokens After Layer 2: Plug-and-Play Inference Acceleration for Large Vision-Langua… ☆553 · Updated last year
- [CVPR 2024] PTQ4SAM: Post-Training Quantization for Segment Anything ☆82 · Updated last year
- DiTAS: Quantizing Diffusion Transformers via Enhanced Activation Smoothing (WACV 2025) ☆12 · Updated this week
- Lossless Training Speed Up by Unbiased Dynamic Data Pruning ☆343 · Updated last year
- [NeurIPS 2024 Oral🔥] DuQuant: Distributing Outliers via Dual Transformation Makes Stronger Quantized LLMs. ☆180 · Updated last year
- [EMNLP 2025 main 🔥] Code for "Stop Looking for Important Tokens in Multimodal Language Models: Duplication Matters More" ☆103 · Updated 4 months ago
- Pruning the VLLMs ☆105 · Updated last year
- [CVPR 2025] The official implementation of "CacheQuant: Comprehensively Accelerated Diffusion Models" ☆44 · Updated 3 months ago
- The official implementation of the NeurIPS 2022 paper Q-ViT. ☆106 · Updated 2 years ago
- 📚 Collection of token-level model compression resources. ☆190 · Updated 5 months ago
- A list of papers, docs, and code about efficient AIGC. This repo aims to provide information for efficient AIGC research, including languag… ☆205 · Updated last year