Official code for paper: [CLS] Attention is All You Need for Training-Free Visual Token Pruning: Make VLM Inference Faster.
☆110 · Updated Jun 29, 2025
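The paper's title names its core idea: rank visual tokens by the vision encoder's [CLS] attention and keep only the most attended ones, with no retraining. A minimal NumPy sketch of that idea (the function name, the `keep_ratio` default, and the head-averaging choice are illustrative assumptions, not the paper's actual API):

```python
import numpy as np

def prune_by_cls_attention(visual_tokens, cls_attention, keep_ratio=0.25):
    """Keep the visual tokens that receive the highest [CLS] attention.

    visual_tokens: (N, D) array of patch embeddings from the vision encoder.
    cls_attention: (N,) attention weights from the [CLS] token to each patch
                   (e.g. averaged over heads in the encoder's last layer).
    keep_ratio:    fraction of tokens to retain (hypothetical default).
    """
    n_keep = max(1, int(len(visual_tokens) * keep_ratio))
    # Indices of the n_keep most-attended patches, restored to original order
    # so the kept tokens preserve their spatial sequence.
    top = np.sort(np.argsort(cls_attention)[-n_keep:])
    return visual_tokens[top], top

# Toy example: 8 patch tokens of dimension 4, keep the 2 most attended.
tokens = np.arange(8 * 4, dtype=float).reshape(8, 4)
attn = np.array([0.05, 0.30, 0.02, 0.20, 0.10, 0.08, 0.15, 0.10])
kept, idx = prune_by_cls_attention(tokens, attn, keep_ratio=0.25)
# idx is [1, 3]: the patches with attention 0.30 and 0.20.
```

The pruned `kept` tokens would then be passed to the LLM in place of the full visual sequence, shrinking the prefill cost roughly in proportion to `keep_ratio`.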
Alternatives and similar repositories for FasterVLM
Users that are interested in FasterVLM are comparing it to the libraries listed below.
- Multi-Stage Vision Token Dropping: Towards Efficient Multimodal Large Language Model ☆36 · Updated Jan 8, 2025
- [ECCV 2024 Oral] Code for paper: An Image is Worth 1/2 Tokens After Layer 2: Plug-and-Play Inference Acceleration for Large Vision-Langua… ☆569 · Updated Jan 4, 2025
- (CVPR 2025) PyramidDrop: Accelerating Your Large Vision-Language Models via Pyramid Visual Redundancy Reduction ☆143 · Updated Mar 6, 2025
- [ICML'25] Official implementation of paper "SparseVLM: Visual Token Sparsification for Efficient Vision-Language Model Inference" and "Sp… ☆258 · Updated Dec 22, 2025
- A paper list of recent work on token compression for ViTs and VLMs ☆881 · Updated this week
- 😎 Awesome papers on token redundancy reduction ☆11 · Updated Mar 12, 2025
- ☆36 · Updated Jun 3, 2025
- Official repository for VisionZip (CVPR 2025) ☆421 · Updated Jul 21, 2025
- [CVPR 2025] PACT: Pruning and Clustering-Based Token Reduction for Faster Visual Language Models ☆59 · Updated Jan 30, 2026
- Code release for VTW (AAAI 2025 Oral) ☆65 · Updated Nov 4, 2025
- ☆28 · Updated Jan 27, 2025
- [ICLR 2025] The official PyTorch implementation of "Dynamic-LLaVA: Efficient Multimodal Large Language Models via Dynamic Vision-language Cont… ☆71 · Updated Sep 18, 2025
- [AAAI 2026] Global Compression Commander: Plug-and-Play Inference Acceleration for High-Resolution Large Vision-Language Models ☆41 · Updated Jan 27, 2026
- [CVPR 2025] DyCoke: Dynamic Compression of Tokens for Fast Video Large Language Models ☆106 · Updated Nov 22, 2025
- [ICCV 2025] Official code for paper: Beyond Text-Visual Attention: Exploiting Visual Cues for Effective Token Pruning in VLMs ☆77 · Updated Jul 1, 2025
- [AAAI 2025] HiRED strategically drops visual tokens in the image encoding stage to improve inference efficiency for High-Resolution Visio… ☆45 · Updated Apr 18, 2025
- Pruning the VLLMs ☆106 · Updated Dec 9, 2024
- ☆66 · Updated Jan 23, 2026
- [ICCV 2025] Official code for "AIM: Adaptive Inference of Multi-Modal LLMs via Token Merging and Pruning" ☆57 · Updated Oct 9, 2025
- [EMNLP 2025 main 🔥] Code for "Stop Looking for Important Tokens in Multimodal Language Models: Duplication Matters More" ☆115 · Updated Oct 12, 2025
- [CVPR 2025] DivPrune: Diversity-based Visual Token Pruning for Large Multimodal Models ☆74 · Updated Dec 1, 2025
- [NeurIPS 2025] FastVID: Dynamic Density Pruning for Fast Video Large Language Models ☆33 · Updated Nov 10, 2025
- [ECCV 2024] Paying More Attention to Image: A Training-Free Method for Alleviating Hallucination in LVLMs ☆168 · Updated Nov 6, 2024
- [CVPR 2025] PVC: Progressive Visual Token Compression for Unified Image and Video Processing in Large Vision-Language Models ☆53 · Updated Jun 12, 2025
- [CVPR'2025] VoCo-LLaMA: This repo is the official implementation of "VoCo-LLaMA: Towards Vision Compression with Large Language Models". ☆203 · Updated Jun 18, 2025
- [NeurIPS 2024] Efficient Large Multi-modal Models via Visual Context Compression ☆67 · Updated Feb 19, 2025
- [ACL 2023] PuMer: Pruning and Merging Tokens for Efficient Vision Language Models ☆36 · Updated Oct 3, 2024
- [ACL 2025] PruneVid: Visual Token Pruning for Efficient Video Large Language Models ☆71 · Updated May 15, 2025
- [CVPR '24] Official implementation of the paper "Multiflow: Shifting Towards Task-Agnostic Vision-Language Pruning". ☆23 · Updated Mar 7, 2025
- [NeurIPS2024] Repo for the paper "ControlMLLM: Training-Free Visual Prompt Learning for Multimodal Large Language Models" ☆206 · Updated Jul 17, 2025
- A training-free approach to accelerate ViTs and VLMs by pruning redundant tokens based on similarity ☆44 · Updated May 24, 2025
- [NeurIPS 2024] Mitigating Object Hallucination via Concentric Causal Attention ☆66 · Updated Aug 30, 2025
- HallE-Control: Controlling Object Hallucination in LMMs ☆32 · Updated Apr 10, 2024
- Project Page for GaussianFormer ☆24 · Updated May 30, 2024
- LLaVA-PruMerge: Adaptive Token Reduction for Efficient Large Multimodal Models ☆166 · Updated Mar 8, 2026
- [CVPR2025] FlashSloth: Lightning Multimodal Large Language Models via Embedded Visual Compression ☆64 · Updated Oct 10, 2025
- [NAACL 2025🔥] MEDA: Dynamic KV Cache Allocation for Efficient Multimodal Long-Context Inference ☆19 · Updated Jun 19, 2025
- The official repo for "Where do Large Vision-Language Models Look at when Answering Questions?" ☆63 · Updated Jan 7, 2026
- [NeurIPS 2025] Official repository for "FlowCut: Rethinking Redundancy via Information Flow for Efficient Vision-Language Models" ☆30 · Updated Dec 9, 2025