📚 Collection of token-level model compression resources.
☆193 · Updated Sep 3, 2025
Alternatives and similar repositories for Awesome-Token-level-Model-Compression
Users interested in Awesome-Token-level-Model-Compression are comparing it to the libraries listed below.
- [AAAI 2026] Global Compression Commander: Plug-and-Play Inference Acceleration for High-Resolution Large Vision-Language Models — ☆40 · Updated Jan 27, 2026
- 📚 Collection of awesome generation acceleration resources. — ☆394 · Updated Jul 7, 2025
- [ICME 2024 Oral] DARA: Domain- and Relation-aware Adapters Make Parameter-efficient Tuning for Visual Grounding — ☆23 · Updated Feb 26, 2025
- Multi-Stage Vision Token Dropping: Towards Efficient Multimodal Large Language Model — ☆37 · Updated Jan 8, 2025
- [EMNLP 2025 Main 🔥] Code for "Stop Looking for Important Tokens in Multimodal Language Models: Duplication Matters More" — ☆114 · Updated Oct 12, 2025
- [AAAI 2025] The official code for SiTo (Similarity-based Token Pruning for Stable Diffusion Models) — ☆44 · Updated Jun 2, 2025
- Official PyTorch code for the ICLR 2025 paper "Gnothi Seauton: Empowering Faithful Self-Interpretability in Black-Box Models" — ☆23 · Updated Mar 4, 2025
- [ICASSP 2024] VGDiffZero: Text-to-image Diffusion Models Can Be Zero-shot Visual Grounders — ☆17 · Updated Feb 11, 2025
- ☆35 · Updated Jun 3, 2025
- A paper list of recent works on token compression for ViT and VLM — ☆861 · Updated Mar 22, 2026
- [EMNLP 2025 Main] Video Compression Commander: Plug-and-Play Inference Acceleration for Video Large Language Models — ☆68 · Updated Mar 13, 2026
- [ICLR 2025] Accelerating Diffusion Transformers with Token-wise Feature Caching — ☆214 · Updated Mar 14, 2025
- [ICLR 2026 🔥] Code for "The Devil behind the mask: An emergent safety vulnerability of Diffusion LLMs" — ☆76 · Updated Feb 9, 2026
- [CVPR 2026] Variation-aware Vision Token Dropping for Faster Large Vision-Language Models — ☆28 · Updated Mar 18, 2026
- [ICCV 2025] The official code of the paper "Combining Similarity and Importance for Video Token Reduction on Large Visual Language Models" — ☆71 · Updated Jan 13, 2026
- ☆31 · Updated Jun 14, 2024
- (ToCa-v2) A new version of ToCa, with faster speed and better acceleration! — ☆41 · Updated Mar 13, 2025
- ☆49 · Updated Mar 3, 2024
- ☆66 · Updated Jan 23, 2026
- ☆19 · Updated Jul 22, 2025
- A collection of token reduction (token pruning, merging, clustering, etc.) techniques for ML/AI — ☆363 · Updated this week
- OmniZip: Audio-Guided Dynamic Token Compression for Fast Omnimodal Large Language Models — ☆66 · Updated Mar 20, 2026
- [ACL 2023] PuMer: Pruning and Merging Tokens for Efficient Vision Language Models — ☆36 · Updated Oct 3, 2024
- Official repository for VisionZip (CVPR 2025) — ☆416 · Updated Jul 21, 2025
- [AAAI 2026] Official PyTorch implementation of the paper "Filter, Correlate, Compress: Training-Free Token Reduction for MLLM Acc…" — ☆59 · Updated Nov 13, 2025
- [ICCV 2025] Official code for "AIM: Adaptive Inference of Multi-Modal LLMs via Token Merging and Pruning" — ☆56 · Updated Oct 9, 2025
- Code release for VTW (AAAI 2025 Oral) — ☆66 · Updated Nov 4, 2025
- Official code for the paper "[CLS] Attention is All You Need for Training-Free Visual Token Pruning: Make VLM Inference Faster" — ☆109 · Updated Jun 29, 2025
- [AAAI 2025] HiRED strategically drops visual tokens in the image encoding stage to improve inference efficiency for High-Resolution Visio… — ☆45 · Updated Apr 18, 2025
- ☆14 · Updated Jun 22, 2022
- [CVPR 2025] Hybrid-Level Instruction Injection for Video Token Compression in Multi-modal Large Language Models — ☆19 · Updated Apr 30, 2025
- [EMNLP 2024 Main] MaPPER: Multimodal Prior-guided Parameter Efficient Tuning for Referring Expression Comprehension — ☆16 · Updated Jan 6, 2025
- Transactions on Multimedia (TMM25) — ☆19 · Updated Apr 8, 2025
- The codebase for the paper "PPT: Token Pruning and Pooling for Efficient Vision Transformer" — ☆28 · Updated Nov 17, 2024
- [ICCV 2025] SparseMM: Head Sparsity Emerges from Visual Concept Responses in MLLMs — ☆82 · Updated Jan 17, 2026
- [ICML 2025] Official implementation of the paper "SparseVLM: Visual Token Sparsification for Efficient Vision-Language Model Inference" and "Sp…" — ☆253 · Updated Dec 22, 2025
- Official PyTorch implementation of the paper "dLLM-Cache: Accelerating Diffusion Large Language Models with Adaptive Caching" (dLLM-Cache… — ☆200 · Updated Nov 17, 2025
- TokLIP: Marry Visual Tokens to CLIP for Multimodal Comprehension and Generation — ☆236 · Updated Aug 18, 2025