📚 Collection of token-level model compression resources.
⭐193 · Sep 3, 2025 · Updated 6 months ago
Alternatives and similar repositories for Awesome-Token-level-Model-Compression
Users interested in Awesome-Token-level-Model-Compression are also comparing it with the repositories listed below.
- [AAAI 2026] Global Compression Commander: Plug-and-Play Inference Acceleration for High-Resolution Large Vision-Language Models ⭐38 · Jan 27, 2026 · Updated last month
- 📚 Collection of awesome generation acceleration resources. ⭐390 · Jul 7, 2025 · Updated 8 months ago
- Multi-Stage Vision Token Dropping: Towards Efficient Multimodal Large Language Model ⭐37 · Jan 8, 2025 · Updated last year
- [ICME 2024 Oral] DARA: Domain- and Relation-aware Adapters Make Parameter-efficient Tuning for Visual Grounding ⭐23 · Feb 26, 2025 · Updated last year
- Official PyTorch code for the ICLR 2025 paper "Gnothi Seauton: Empowering Faithful Self-Interpretability in Black-Box Models" ⭐24 · Mar 4, 2025 · Updated last year
- (ICLR 2026 🔥) Code for "The Devil behind the mask: An emergent safety vulnerability of Diffusion LLMs" ⭐74 · Feb 9, 2026 · Updated last month
- [AAAI 2025] The official code for SiTo (Similarity-based Token Pruning for Stable Diffusion Models) ⭐43 · Jun 2, 2025 · Updated 9 months ago
- [ICASSP 2024] VGDiffZero: Text-to-image Diffusion Models Can Be Zero-shot Visual Grounders ⭐17 · Feb 11, 2025 · Updated last year
- A paper list of recent works on token compression for ViT and VLM ⭐843 · Updated this week
- [EMNLP 2025 Main] Video Compression Commander: Plug-and-Play Inference Acceleration for Video Large Language Models ⭐64 · Updated this week
- [ICLR 2025] Accelerating Diffusion Transformers with Token-wise Feature Caching ⭐210 · Mar 14, 2025 · Updated 11 months ago
- [ICCV 2025] From Reusing to Forecasting: Accelerating Diffusion Models with TaylorSeers ⭐376 · Mar 2, 2026 · Updated last week
- [ICCV '25] The official code of the paper "Combining Similarity and Importance for Video Token Reduction on Large Visual Language Models" ⭐71 · Jan 13, 2026 · Updated last month
- FORA introduces a simple yet effective caching mechanism in the Diffusion Transformer architecture for faster inference sampling. ⭐52 · Jul 8, 2024 · Updated last year
- A collection of token reduction (token pruning, merging, clustering, etc.) techniques for ML/AI ⭐343 · Updated this week
- Official repository for VisionZip (CVPR 2025) ⭐410 · Jul 21, 2025 · Updated 7 months ago
- [AAAI 2025] HiRED strategically drops visual tokens in the image encoding stage to improve inference efficiency for High-Resolution Visio… ⭐44 · Apr 18, 2025 · Updated 10 months ago
- [NeurIPS 2025] HoliTom: Holistic Token Merging for Fast Video Large Language Models ⭐71 · Oct 10, 2025 · Updated 4 months ago
- The official repo for "CodeScaler: Scaling Code LLM Training and Test-Time Inference via Execution-Free Reward Models" ⭐30 · Updated this week
- [CVPR 2025] VASparse: Towards Efficient Visual Hallucination Mitigation via Visual-Aware Token Sparsification ⭐49 · Mar 24, 2025 · Updated 11 months ago
- [AAAI '26] The official PyTorch implementation for the paper: Filter, Correlate, Compress: Training-Free Token Reduction for MLLM Acc… ⭐58 · Nov 13, 2025 · Updated 3 months ago
- Official code for the paper: [CLS] Attention is All You Need for Training-Free Visual Token Pruning: Make VLM Inference Faster. ⭐107 · Jun 29, 2025 · Updated 8 months ago
- The codebase for the paper "PPT: Token Pruning and Pooling for Efficient Vision Transformer" ⭐28 · Nov 17, 2024 · Updated last year
- 📚 Awesome papers on token redundancy reduction ⭐11 · Mar 12, 2025 · Updated 11 months ago
- Exploiting Inter-sample and Inter-feature Relations in Dataset Distillation (CVPR 2024) ⭐11 · Jun 16, 2024 · Updated last year
- TokLIP: Marry Visual Tokens to CLIP for Multimodal Comprehension and Generation ⭐236 · Aug 18, 2025 · Updated 6 months ago
- [ICCV 2025] SparseMM: Head Sparsity Emerges from Visual Concept Responses in MLLMs ⭐82 · Jan 17, 2026 · Updated last month
- Code release for VTW (AAAI 2025 Oral) ⭐64 · Nov 4, 2025 · Updated 4 months ago
- LLaVA-PruMerge: Adaptive Token Reduction for Efficient Large Multimodal Models ⭐166 · Sep 27, 2025 · Updated 5 months ago
- Official PyTorch implementation of the paper "dLLM-Cache: Accelerating Diffusion Large Language Models with Adaptive Caching" (dLLM-Cache… ⭐198 · Nov 17, 2025 · Updated 3 months ago
- (CVPR 2025) PyramidDrop: Accelerating Your Large Vision-Language Models via Pyramid Visual Redundancy Reduction ⭐142 · Mar 6, 2025 · Updated last year
- [CVPR 2025] Hybrid-Level Instruction Injection for Video Token Compression in Multi-modal Large Language Models ⭐19 · Apr 30, 2025 · Updated 10 months ago
- [ICML '25] Official implementation of the paper "SparseVLM: Visual Token Sparsification for Efficient Vision-Language Model Inference" and "Sp… ⭐243 · Dec 22, 2025 · Updated 2 months ago
- Official code for MotionBench (CVPR 2025) ⭐66 · Mar 3, 2025 · Updated last year
- [ICCV 2025] ONLY: One-Layer Intervention Sufficiently Mitigates Hallucinations in Large Vision-Language Models ⭐49 · Jul 7, 2025 · Updated 8 months ago