Collection of token-level model compression resources.
⭐195 · Sep 3, 2025 · Updated 8 months ago
Alternatives and similar repositories for Awesome-Token-level-Model-Compression
Users interested in Awesome-Token-level-Model-Compression are comparing it to the libraries listed below.
- [AAAI 2026] Global Compression Commander: Plug-and-Play Inference Acceleration for High-Resolution Large Vision-Language Models ⭐42 · Jan 27, 2026 · Updated 3 months ago
- Collection of awesome generation acceleration resources. ⭐399 · Jul 7, 2025 · Updated 10 months ago
- [ICME 2024 Oral] DARA: Domain- and Relation-aware Adapters Make Parameter-efficient Tuning for Visual Grounding ⭐23 · Feb 26, 2025 · Updated last year
- Multi-Stage Vision Token Dropping: Towards Efficient Multimodal Large Language Model ⭐36 · Jan 8, 2025 · Updated last year
- [EMNLP 2025 main 🔥] Code for "Stop Looking for Important Tokens in Multimodal Language Models: Duplication Matters More" ⭐118 · Oct 12, 2025 · Updated 6 months ago
- [AAAI-2025] The official code for SiTo (Similarity-based Token Pruning for Stable Diffusion Models) ⭐43 · Jun 2, 2025 · Updated 11 months ago
- Official PyTorch code for ICLR 2025 paper "Gnothi Seauton: Empowering Faithful Self-Interpretability in Black-Box Models" ⭐23 · Mar 4, 2025 · Updated last year
- [ICASSP 2024] VGDiffZero: Text-to-image Diffusion Models Can Be Zero-shot Visual Grounders ⭐17 · Feb 11, 2025 · Updated last year
- ⭐36 · Jun 3, 2025 · Updated 11 months ago
- A paper list of recent works on token compression for ViT and VLM ⭐891 · Apr 14, 2026 · Updated 3 weeks ago
- [ICLR 2025] Accelerating Diffusion Transformers with Token-wise Feature Caching ⭐217 · Mar 14, 2025 · Updated last year
- (ICLR 2026 🔥) Code for "The Devil behind the mask: An emergent safety vulnerability of Diffusion LLMs" ⭐77 · Feb 9, 2026 · Updated 3 months ago
- [EMNLP 2025 Main] Video Compression Commander: Plug-and-Play Inference Acceleration for Video Large Language Models ⭐124 · Apr 29, 2026 · Updated last week
- [ICCV 2025] From Reusing to Forecasting: Accelerating Diffusion Models with TaylorSeers ⭐393 · Mar 2, 2026 · Updated 2 months ago
- [CVPR 2026] Variation-aware Vision Token Dropping for Faster Large Vision-Language Models ⭐31 · Mar 18, 2026 · Updated last month
- [ICCV'25] The official code of paper "Combining Similarity and Importance for Video Token Reduction on Large Visual Language Models" ⭐74 · Jan 13, 2026 · Updated 3 months ago
- ⭐31 · Jun 14, 2024 · Updated last year
- ⭐49 · Mar 3, 2024 · Updated 2 years ago
- ⭐66 · Jan 23, 2026 · Updated 3 months ago
- ⭐19 · Jul 22, 2025 · Updated 9 months ago
- [NeurIPS 2025] HoliTom: Holistic Token Merging for Fast Video Large Language Models ⭐78 · Oct 10, 2025 · Updated 6 months ago
- FORA introduces a simple yet effective caching mechanism in the Diffusion Transformer architecture for faster inference sampling. ⭐55 · Jul 8, 2024 · Updated last year
- [ACL 2023] PuMer: Pruning and Merging Tokens for Efficient Vision Language Models ⭐36 · Oct 3, 2024 · Updated last year
- Official repository for VisionZip (CVPR 2025) ⭐428 · Jul 21, 2025 · Updated 9 months ago
- [CVPR 2026] OmniZip: Audio-Guided Dynamic Token Compression for Fast Omnimodal Large Language Models ⭐78 · Apr 20, 2026 · Updated 2 weeks ago
- Code release for VTW (AAAI 2025 Oral) ⭐67 · Nov 4, 2025 · Updated 6 months ago
- Official code for paper: [CLS] Attention is All You Need for Training-Free Visual Token Pruning: Make VLM Inference Faster. ⭐112 · Jun 29, 2025 · Updated 10 months ago
- [AAAI 2025] HiRED strategically drops visual tokens in the image encoding stage to improve inference efficiency for High-Resolution Visio… ⭐45 · Apr 18, 2025 · Updated last year
- [AAAI 26'] This is the official PyTorch implementation for paper: Filter, Correlate, Compress: Training-Free Token Reduction for MLLM Acc… ⭐44 · Nov 13, 2025 · Updated 5 months ago
- ⭐14 · Jun 22, 2022 · Updated 3 years ago
- [CVPR 2025] Hybrid-Level Instruction Injection for Video Token Compression in Multi-modal Large Language Models ⭐20 · Apr 30, 2025 · Updated last year
- Transactions on Multimedia (TMM25) ⭐19 · Apr 8, 2025 · Updated last year
- The codebase for paper "PPT: Token Pruning and Pooling for Efficient Vision Transformer" ⭐29 · Nov 17, 2024 · Updated last year
- [ICCV 2025] SparseMM: Head Sparsity Emerges from Visual Concept Responses in MLLMs ⭐85 · Jan 17, 2026 · Updated 3 months ago
- [ICML'25] Official implementation of paper "SparseVLM: Visual Token Sparsification for Efficient Vision-Language Model Inference" and "Sp… ⭐260 · Dec 22, 2025 · Updated 4 months ago
- Official PyTorch implementation of the paper "dLLM-Cache: Accelerating Diffusion Large Language Models with Adaptive Caching" (dLLM-Cache… ⭐205 · May 1, 2026 · Updated last week
- LLaVA-PruMerge: Adaptive Token Reduction for Efficient Large Multimodal Models ⭐167 · Mar 8, 2026 · Updated 2 months ago
- A paper list about Token Merge, Reduce, Resample, Drop for MLLMs. ⭐89 · Oct 26, 2025 · Updated 6 months ago
- [CVPR 2025] VASparse: Towards Efficient Visual Hallucination Mitigation via Visual-Aware Token Sparsification ⭐50 · Mar 24, 2025 · Updated last year