xuyang-liu16 / Awesome-Token-level-Model-Compression
📚 Collection of token-level model compression resources.
☆98 · Updated this week
Alternatives and similar repositories for Awesome-Token-level-Model-Compression
Users interested in Awesome-Token-level-Model-Compression are comparing it to the repositories listed below.
- Code release for VTW (AAAI 2025, Oral) ☆43 · Updated 4 months ago
- A paper list about Token Merge, Reduce, Resample, Drop for MLLMs. ☆61 · Updated 4 months ago
- [ICML'25] Official implementation of the paper "SparseVLM: Visual Token Sparsification for Efficient Vision-Language Model Inference". ☆112 · Updated 2 weeks ago
- Official code for the paper: [CLS] Attention is All You Need for Training-Free Visual Token Pruning: Make VLM Inference Faster. ☆76 · Updated 5 months ago
- ☆46 · Updated last month
- Code for "Stop Looking for Important Tokens in Multimodal Language Models: Duplication Matters More" ☆46 · Updated last month
- Multi-Stage Vision Token Dropping: Towards Efficient Multimodal Large Language Model ☆30 · Updated 4 months ago
- The official code implementation of the paper "Combining Similarity and Importance for Video Token Reduction on Large Visual Language Models" ☆40 · Updated last week
- [ICLR 2025] The official PyTorch implementation of "Dynamic-LLaVA: Efficient Multimodal Large Language Models via Dynamic Vision-language Cont… ☆40 · Updated 6 months ago
- LLaVA-PruMerge: Adaptive Token Reduction for Efficient Large Multimodal Models ☆130 · Updated last year
- (CVPR 2025) PyramidDrop: Accelerating Your Large Vision-Language Models via Pyramid Visual Redundancy Reduction ☆105 · Updated 3 months ago
- [EMNLP 2024 Findings🔥] Official implementation of "LOOK-M: Look-Once Optimization in KV Cache for Efficient Multimodal Long-Context In… ☆97 · Updated 6 months ago
- [NeurIPS'24] Efficient and accurate memory-saving method towards W4A4 large multi-modal models. ☆73 · Updated 5 months ago
- Official PyTorch implementation of the paper "dLLM-Cache: Accelerating Diffusion Large Language Models with Adaptive Caching" (dLLM-Cache… ☆72 · Updated this week
- Global Compression Commander: Plug-and-Play Inference Acceleration for High-Resolution Large Vision-Language Models ☆25 · Updated 2 weeks ago
- [CVPR 2025] VoCo-LLaMA: This repo is the official implementation of "VoCo-LLaMA: Towards Vision Compression with Large Language Models". ☆163 · Updated last week
- ✈️ Towards Stabilized and Efficient Diffusion Transformers through Long-Skip-Connections with Spectral Constraints ☆67 · Updated 2 months ago
- ☆111 · Updated last week
- [AAAI 2025] HiRED strategically drops visual tokens in the image encoding stage to improve inference efficiency for High-Resolution Visio… ☆36 · Updated last month
- [CVPR 2025] DyCoke: Dynamic Compression of Tokens for Fast Video Large Language Models ☆48 · Updated this week
- The official implementation of "Dynamic Tuning Towards Parameter and Inference Efficiency for ViT Adaptation" (NeurIPS 2024) ☆46 · Updated 5 months ago
- [CVPR 2025] Adaptive Keyframe Sampling for Long Video Understanding ☆64 · Updated last month
- [ICLR 2025 Oral] ChartMoE: Mixture of Diversely Aligned Expert Connector for Chart Understanding ☆84 · Updated 2 months ago
- [ICLR 2025] γ-MOD: Mixture-of-Depth Adaptation for Multimodal Large Language Models ☆36 · Updated 3 months ago
- ☆77 · Updated 4 months ago
- Official Repository: A Comprehensive Benchmark for Logical Reasoning in MLLMs ☆30 · Updated last week
- This is the official implementation of our paper "QuoTA: Query-oriented Token Assignment via CoT Query Decouple for Long Video Comprehens… ☆70 · Updated last month
- [ICLR 2025] Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models ☆97 · Updated 3 months ago
- ☆84 · Updated 2 months ago
- Paper list, tutorial, and nano code snippets for Diffusion Large Language Models. ☆51 · Updated this week