Official code for the paper "[CLS] Attention is All You Need for Training-Free Visual Token Pruning: Make VLM Inference Faster".
☆112 · Jun 29, 2025 · Updated 10 months ago
Alternatives and similar repositories for FasterVLM
Users interested in FasterVLM are comparing it to the libraries listed below.
- Multi-Stage Vision Token Dropping: Towards Efficient Multimodal Large Language Model · ☆36 · Jan 8, 2025 · Updated last year
- [ECCV 2024 Oral] Code for paper: An Image is Worth 1/2 Tokens After Layer 2: Plug-and-Play Inference Acceleration for Large Vision-Langua… · ☆575 · Jan 4, 2025 · Updated last year
- (CVPR 2025) PyramidDrop: Accelerating Your Large Vision-Language Models via Pyramid Visual Redundancy Reduction · ☆145 · Mar 6, 2025 · Updated last year
- [ICML'25] Official implementation of the papers "SparseVLM: Visual Token Sparsification for Efficient Vision-Language Model Inference" and "Sp… · ☆260 · Dec 22, 2025 · Updated 4 months ago
- A paper list of recent works on token compression for ViT and VLM · ☆891 · Apr 14, 2026 · Updated 3 weeks ago
- 😎 Awesome papers on token redundancy reduction · ☆11 · Mar 12, 2025 · Updated last year
- ☆36 · Jun 3, 2025 · Updated 11 months ago
- Official repository for VisionZip (CVPR 2025) · ☆427 · Jul 21, 2025 · Updated 9 months ago
- [CVPR 2025] PACT: Pruning and Clustering-Based Token Reduction for Faster Visual Language Models · ☆59 · Jan 30, 2026 · Updated 3 months ago
- Code release for VTW (AAAI 2025 Oral) · ☆67 · Nov 4, 2025 · Updated 6 months ago
- ☆29 · Jan 27, 2025 · Updated last year
- [ICLR 2025] The official PyTorch implementation of "Dynamic-LLaVA: Efficient Multimodal Large Language Models via Dynamic Vision-language Cont… · ☆72 · Sep 18, 2025 · Updated 7 months ago
- [CVPR 2025] DyCoke: Dynamic Compression of Tokens for Fast Video Large Language Models · ☆108 · Nov 22, 2025 · Updated 5 months ago
- [AAAI 2026] Global Compression Commander: Plug-and-Play Inference Acceleration for High-Resolution Large Vision-Language Models · ☆42 · Jan 27, 2026 · Updated 3 months ago
- [ICCV 2025] Official code for paper: Beyond Text-Visual Attention: Exploiting Visual Cues for Effective Token Pruning in VLMs · ☆78 · Jul 1, 2025 · Updated 10 months ago
- [AAAI 2025] HiRED strategically drops visual tokens in the image encoding stage to improve inference efficiency for High-Resolution Visio… · ☆45 · Apr 18, 2025 · Updated last year
- Pruning the VLLMs · ☆105 · Dec 9, 2024 · Updated last year
- ☆66 · Jan 23, 2026 · Updated 3 months ago
- [ICCV 2025] Official code for "AIM: Adaptive Inference of Multi-Modal LLMs via Token Merging and Pruning" · ☆59 · Oct 9, 2025 · Updated 6 months ago
- ☆30 · Feb 27, 2025 · Updated last year
- [EMNLP 2025 main 🔥] Code for "Stop Looking for Important Tokens in Multimodal Language Models: Duplication Matters More" · ☆117 · Oct 12, 2025 · Updated 6 months ago
- [CVPR 2025] DivPrune: Diversity-based Visual Token Pruning for Large Multimodal Models · ☆77 · Apr 16, 2026 · Updated 3 weeks ago
- [NeurIPS 2025] FastVID: Dynamic Density Pruning for Fast Video Large Language Models · ☆35 · Nov 10, 2025 · Updated 5 months ago
- [ECCV 2024] Paying More Attention to Image: A Training-Free Method for Alleviating Hallucination in LVLMs · ☆169 · Nov 6, 2024 · Updated last year
- [CVPR 2025] Official implementation of "VoCo-LLaMA: Towards Vision Compression with Large Language Models" · ☆204 · Jun 18, 2025 · Updated 10 months ago
- [NeurIPS 2024] Efficient Large Multi-modal Models via Visual Context Compression · ☆67 · Feb 19, 2025 · Updated last year
- ☆19 · Aug 6, 2025 · Updated 9 months ago
- [ACL 2023] PuMer: Pruning and Merging Tokens for Efficient Vision Language Models · ☆36 · Oct 3, 2024 · Updated last year
- [ACL 2025] PruneVid: Visual Token Pruning for Efficient Video Large Language Models · ☆71 · May 15, 2025 · Updated 11 months ago
- [CVPR '24] Official implementation of the paper "Multiflow: Shifting Towards Task-Agnostic Vision-Language Pruning" · ☆23 · Mar 7, 2025 · Updated last year
- [NeurIPS 2024] Repo for the paper "ControlMLLM: Training-Free Visual Prompt Learning for Multimodal Large Language Models" · ☆206 · Jul 17, 2025 · Updated 9 months ago
- A training-free approach to accelerate ViTs and VLMs by pruning redundant tokens based on similarity · ☆44 · May 24, 2025 · Updated 11 months ago
- [NeurIPS 2024] Mitigating Object Hallucination via Concentric Causal Attention · ☆66 · Aug 30, 2025 · Updated 8 months ago
- HallE-Control: Controlling Object Hallucination in LMMs · ☆32 · Apr 10, 2024 · Updated 2 years ago
- LLaVA-PruMerge: Adaptive Token Reduction for Efficient Large Multimodal Models · ☆167 · Mar 8, 2026 · Updated last month
- [CVPR 2025] FlashSloth: Lightning Multimodal Large Language Models via Embedded Visual Compression · ☆65 · Oct 10, 2025 · Updated 6 months ago
- Project Page for GaussianFormer · ☆24 · May 30, 2024 · Updated last year
- [NeurIPS 2025] Official repository for "FlowCut: Rethinking Redundancy via Information Flow for Efficient Vision-Language Models" · ☆31 · Dec 9, 2025 · Updated 4 months ago
- The official repo for "Where do Large Vision-Language Models Look at when Answering Questions?" · ☆63 · Jan 7, 2026 · Updated 4 months ago