armenjeddi / saint
A training-free approach to accelerating ViTs and VLMs by pruning redundant tokens based on similarity.
☆41 · Updated 7 months ago
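The core idea described above (dropping redundant tokens by similarity) can be illustrated with a minimal sketch. This is an assumption about how similarity-based pruning typically works (in the spirit of ToMe-style methods), not saint's actual implementation; the function name `prune_redundant_tokens` and the drop count `r` are hypothetical.

```python
# Illustrative sketch: drop the r tokens most similar to their nearest
# neighbour, assuming near-duplicate tokens carry redundant information.
import torch

def prune_redundant_tokens(tokens: torch.Tensor, r: int) -> torch.Tensor:
    """tokens: (batch, num_tokens, dim) -> (batch, num_tokens - r, dim)."""
    x = torch.nn.functional.normalize(tokens, dim=-1)
    sim = x @ x.transpose(-2, -1)                        # pairwise cosine similarity
    sim.diagonal(dim1=-2, dim2=-1).fill_(float("-inf"))  # ignore self-similarity
    redundancy = sim.max(dim=-1).values                  # similarity to nearest neighbour
    keep = redundancy.argsort(dim=-1)[:, : tokens.shape[1] - r]  # least redundant tokens
    keep = keep.sort(dim=-1).values                      # preserve original token order
    idx = keep.unsqueeze(-1).expand(-1, -1, tokens.shape[-1])
    return tokens.gather(1, idx)

# Example: prune 16 of 197 ViT tokens (CLS + 196 patches)
out = prune_redundant_tokens(torch.randn(2, 197, 768), r=16)
print(out.shape)  # torch.Size([2, 181, 768])
```

Because the scoring uses no gradients or learned parameters, a step like this can be inserted between transformer layers at inference time, which is what makes such approaches training-free.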
Alternatives and similar repositories for saint
Users interested in saint are comparing it to the libraries listed below.
- Official code for the paper "[CLS] Attention is All You Need for Training-Free Visual Token Pruning: Make VLM Inference Faster" ☆101 · Updated 6 months ago
- [ICML'25] Official implementation of the papers "SparseVLM: Visual Token Sparsification for Efficient Vision-Language Model Inference" and "Sp…" ☆221 · Updated 3 weeks ago
- ☆28 · Updated 10 months ago
- Multi-Stage Vision Token Dropping: Towards Efficient Multimodal Large Language Model ☆36 · Updated last year
- [CVPR 2025] PACT: Pruning and Clustering-Based Token Reduction for Faster Visual Language Models ☆54 · Updated 3 months ago
- [ICCV 2025] Official code for the paper "Beyond Text-Visual Attention: Exploiting Visual Cues for Effective Token Pruning in VLMs" ☆56 · Updated 6 months ago
- ☆62 · Updated 8 months ago
- [CVPR 2025] DyCoke: Dynamic Compression of Tokens for Fast Video Large Language Models ☆95 · Updated last month
- [AAAI 2025] HiRED strategically drops visual tokens in the image-encoding stage to improve inference efficiency for High-Resolution Visio… ☆44 · Updated 8 months ago
- CLIP-MoE: Mixture of Experts for CLIP ☆51 · Updated last year
- [EMNLP 2025 main 🔥] Code for "Stop Looking for Important Tokens in Multimodal Language Models: Duplication Matters More" ☆97 · Updated 3 months ago
- [ICML 2025] Official code for "From Local Details to Global Context: Advancing Vision-Language Models with Attention-Based Selection" ☆25 · Updated 6 months ago
- The official implementation of MTLoRA: A Low-Rank Adaptation Approach for Efficient Multi-Task Learning (CVPR '24) ☆69 · Updated 6 months ago
- Code release for VTW (AAAI 2025 Oral) ☆65 · Updated 2 months ago
- [NeurIPS'25] HoliTom: Holistic Token Merging for Fast Video Large Language Models ☆68 · Updated 3 months ago
- [EMNLP 2025 Main] Video Compression Commander: Plug-and-Play Inference Acceleration for Video Large Language Models ☆49 · Updated this week
- [ICML 2025 Oral] Mixture of Lookup Experts ☆61 · Updated last month
- [NeurIPS'24] MoE Jetpack: From Dense Checkpoints to Adaptive Mixture of Experts for Vision Tasks ☆133 · Updated last year
- LLaVA-PruMerge: Adaptive Token Reduction for Efficient Large Multimodal Models ☆158 · Updated 3 months ago
- [CVPR 2025] PyramidDrop: Accelerating Your Large Vision-Language Models via Pyramid Visual Redundancy Reduction ☆137 · Updated 10 months ago
- Adapting LLaMA Decoder to Vision Transformer ☆30 · Updated last year
- [NAACL 2025 🔥] MEDA: Dynamic KV Cache Allocation for Efficient Multimodal Long-Context Inference ☆16 · Updated 6 months ago
- [NeurIPS'24] Efficient and accurate memory-saving method towards W4A4 large multi-modal models ☆93 · Updated last year
- [ICML 2024] CrossGET: Cross-Guided Ensemble of Tokens for Accelerating Vision-Language Transformers ☆34 · Updated last year
- The official implementation of "Dynamic Tuning Towards Parameter and Inference Efficiency for ViT Adaptation" (NeurIPS 2024) ☆52 · Updated last year
- Official repository of InLine attention (NeurIPS 2024) ☆57 · Updated last year
- [NeurIPS 2025] VeriThinker: Learning to Verify Makes Reasoning Model Efficient ☆64 · Updated 3 months ago
- [2025] Efficient Vision Language Models: A Survey ☆45 · Updated 5 months ago
- ☆30 · Updated last year
- [ICCV'25] Official code for the paper "Combining Similarity and Importance for Video Token Reduction on Large Visual Language Models" ☆67 · Updated last month