xuyang-liu16 / Awesome-Token-level-Model-Compression
📚 Collection of token-level model compression resources.
☆140 · Updated last week
Alternatives and similar repositories for Awesome-Token-level-Model-Compression
Users interested in Awesome-Token-level-Model-Compression are comparing it to the libraries listed below.
- Code release for VTW (AAAI 2025) Oral ☆44 · Updated 5 months ago
- Code for "Stop Looking for Important Tokens in Multimodal Language Models: Duplication Matters More" ☆60 · Updated 2 months ago
- Official code for paper: [CLS] Attention is All You Need for Training-Free Visual Token Pruning: Make VLM Inference Faster. ☆82 · Updated 2 weeks ago
- ☆53 · Updated 2 months ago
- [ICML'25] Official implementation of paper "SparseVLM: Visual Token Sparsification for Efficient Vision-Language Model Inference". ☆128 · Updated last month
- A paper list about Token Merge, Reduce, Resample, Drop for MLLMs. ☆65 · Updated 6 months ago
- [arXiv 2025] Efficient Reasoning Models: A Survey ☆227 · Updated this week
- [ICLR 2025] The official PyTorch implementation of "Dynamic-LLaVA: Efficient Multimodal Large Language Models via Dynamic Vision-language Cont… ☆46 · Updated 7 months ago
- Multi-Stage Vision Token Dropping: Towards Efficient Multimodal Large Language Model ☆30 · Updated 6 months ago
- LLaVA-PruMerge: Adaptive Token Reduction for Efficient Large Multimodal Models ☆139 · Updated 2 weeks ago
- [EMNLP 2024 Findings🔥] Official implementation of "LOOK-M: Look-Once Optimization in KV Cache for Efficient Multimodal Long-Context In… ☆97 · Updated 8 months ago
- Official PyTorch implementation of the paper "dLLM-Cache: Accelerating Diffusion Large Language Models with Adaptive Caching" (dLLM-Cache… ☆126 · Updated last week
- Pruning the VLLMs ☆97 · Updated 7 months ago
- [NeurIPS'24] Efficient and accurate memory saving method towards W4A4 large multi-modal models. ☆78 · Updated 6 months ago
- (CVPR 2025) PyramidDrop: Accelerating Your Large Vision-Language Models via Pyramid Visual Redundancy Reduction ☆115 · Updated 4 months ago
- [ICLR 2025] Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models ☆116 · Updated last week
- [ECCV 2024 Oral] Code for paper: An Image is Worth 1/2 Tokens After Layer 2: Plug-and-Play Inference Acceleration for Large Vision-Langua… ☆451 · Updated 6 months ago
- [ICLR 2025] LLaVA-MoD: Making LLaVA Tiny via MoE-Knowledge Distillation ☆180 · Updated 3 months ago
- [ICCV'25] The official code implementation of paper "Combining Similarity and Importance for Video Token Reduction on Large Visual Langua… ☆47 · Updated this week
- Paper list, tutorial, and nano code snippet for Diffusion Large Language Models. ☆82 · Updated 3 weeks ago
- [ICLR 2025 Oral] ChartMoE: Mixture of Diversely Aligned Expert Connector for Chart Understanding ☆86 · Updated 3 months ago
- 🚀 Global Compression Commander: Plug-and-Play Inference Acceleration for High-Resolution Large Vision-Language Models ☆30 · Updated last month
- [CVPR'2025] VoCo-LLaMA: This repo is the official implementation of "VoCo-LLaMA: Towards Vision Compression with Large Language Models". ☆176 · Updated 3 weeks ago
- This is the official implementation of our paper "QuoTA: Query-oriented Token Assignment via CoT Query Decouple for Long Video Comprehens… ☆73 · Updated 2 months ago
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆105 · Updated last month
- The Next Step Forward in Multimodal LLM Alignment ☆169 · Updated 2 months ago
- Official repository for VisionZip (CVPR 2025) ☆319 · Updated last month
- Official implementation of "Fast-dLLM: Training-free Acceleration of Diffusion LLM by Enabling KV Cache and Parallel Decoding" ☆278 · Updated last week
- [NeurIPS 24] MoE Jetpack: From Dense Checkpoints to Adaptive Mixture of Experts for Vision Tasks ☆128 · Updated 7 months ago
- [AAAI 2025] HiRED strategically drops visual tokens in the image encoding stage to improve inference efficiency for High-Resolution Visio… ☆39 · Updated 2 months ago