ArmenJeddi / saint
A training-free approach to accelerate ViTs and VLMs by pruning redundant tokens based on similarity.
☆39 · Updated 5 months ago
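For context on what the list below is comparing against, here is a minimal sketch of similarity-based token pruning, the idea saint's description names. The function name, the `keep_ratio` parameter, and the max-cosine-similarity redundancy score are illustrative assumptions, not saint's actual interface.

```python
import torch
import torch.nn.functional as F

def prune_redundant_tokens(tokens: torch.Tensor, keep_ratio: float = 0.5) -> torch.Tensor:
    """Drop the most redundant tokens. tokens: (batch, num_tokens, dim)."""
    x = F.normalize(tokens, dim=-1)                  # unit-norm so dot products are cosine similarities
    sim = x @ x.transpose(-2, -1)                    # (B, N, N) pairwise cosine similarity
    sim.diagonal(dim1=-2, dim2=-1).fill_(-1.0)       # a token should not count as similar to itself
    redundancy = sim.max(dim=-1).values              # score: max similarity to any other token
    n_keep = max(1, int(tokens.shape[1] * keep_ratio))
    keep = redundancy.topk(n_keep, dim=-1, largest=False).indices  # keep the least redundant
    keep, _ = keep.sort(dim=-1)                      # preserve original token order
    return tokens.gather(1, keep.unsqueeze(-1).expand(-1, -1, tokens.shape[-1]))

# e.g. halve a ViT's 197 tokens before the later blocks:
# pruned = prune_redundant_tokens(torch.randn(2, 197, 768), keep_ratio=0.5)
```

Because the scoring is purely geometric (no gradients, no retraining), a routine like this can be dropped between the encoder blocks of a frozen model, which is what "training-free" means throughout this list.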
Alternatives and similar repositories for saint
Users interested in saint are comparing it to the repositories listed below.
- Official code for the paper: [CLS] Attention is All You Need for Training-Free Visual Token Pruning: Make VLM Inference Faster (a sketch of this [CLS]-attention scoring appears after this list). ☆97 · Updated 4 months ago
- Multi-Stage Vision Token Dropping: Towards Efficient Multimodal Large Language Model ☆36 · Updated 10 months ago
- ☆60 · Updated 6 months ago
- [CVPR 2025] PACT: Pruning and Clustering-Based Token Reduction for Faster Visual Language Models ☆51 · Updated last month
- CLIP-MoE: Mixture of Experts for CLIP ☆50 · Updated last year
- [ICML '25] Official implementation of the paper "SparseVLM: Visual Token Sparsification for Efficient Vision-Language Model Inference". ☆194 · Updated 5 months ago
- [AAAI 2025] HiRED strategically drops visual tokens in the image encoding stage to improve inference efficiency for High-Resolution Vision-Language Models. ☆42 · Updated 7 months ago
- [ICCV 2025] Official code for the paper: Beyond Text-Visual Attention: Exploiting Visual Cues for Effective Token Pruning in VLMs. ☆50 · Updated 4 months ago
- ☆28 · Updated 8 months ago
- [CVPR 2025] DyCoke: Dynamic Compression of Tokens for Fast Video Large Language Models ☆87 · Updated last month
- Adapting LLaMA Decoder to Vision Transformer ☆30 · Updated last year
- [EMNLP 2025 Main] Video Compression Commander: Plug-and-Play Inference Acceleration for Video Large Language Models ☆38 · Updated last week
- The official implementation for MTLoRA: A Low-Rank Adaptation Approach for Efficient Multi-Task Learning (CVPR '24) ☆69 · Updated 4 months ago
- [CVPR 2025] DivPrune: Diversity-based Visual Token Pruning for Large Multimodal Models ☆55 · Updated 5 months ago
- Code release for VTW (AAAI 2025 Oral) ☆61 · Updated 2 weeks ago
- 📚 Collection of token-level model compression resources. ☆180 · Updated 2 months ago
- iLLaVA: An Image is Worth Fewer Than 1/3 Input Tokens in Large Multimodal Models ☆19 · Updated 9 months ago
- [ICML 2025] Official code for the paper: From Local Details to Global Context: Advancing Vision-Language Models with Attention-Based Selection ☆24 · Updated 4 months ago
- [NeurIPS 2025] VeriThinker: Learning to Verify Makes Reasoning Model Efficient ☆61 · Updated last month
- LLaVA-PruMerge: Adaptive Token Reduction for Efficient Large Multimodal Models ☆154 · Updated last month
- Official repository of InLine attention (NeurIPS 2024) ☆56 · Updated 11 months ago
- [ICML 2025 Oral] Mixture of Lookup Experts ☆54 · Updated 6 months ago
- [CVPR 2025] PyramidDrop: Accelerating Your Large Vision-Language Models via Pyramid Visual Redundancy Reduction ☆134 · Updated 8 months ago
- Dimple, the first Discrete Diffusion Multimodal Large Language Model ☆109 · Updated 4 months ago
- [NeurIPS 2024] An efficient and accurate memory-saving method for W4A4 large multi-modal models. ☆91 · Updated 10 months ago
- [Findings of EMNLP 2024] AdaMoE: Token-Adaptive Routing with Null Experts for Mixture-of-Experts Language Models ☆15 · Updated last year
- This repo contains the source code for VB-LoRA: Extreme Parameter Efficient Fine-Tuning with Vector Banks (NeurIPS 2024). ☆42 · Updated last year
- [ICML 2024] CrossGET: Cross-Guided Ensemble of Tokens for Accelerating Vision-Language Transformers ☆34 · Updated 10 months ago
- [NeurIPS 2024] MoE Jetpack: From Dense Checkpoints to Adaptive Mixture of Experts for Vision Tasks ☆132 · Updated 11 months ago
- [ICML 2025] This is the official PyTorch implementation of "🎵 HarmoniCa: Harmonizing Training and Inference for Better Feature Caching in Diffusion Transformer Acceleration". ☆43 · Updated 4 months ago
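As flagged in the first list item, a recurring recipe across these repositories is to score each visual token by the attention the [CLS] token pays it and keep only the top-scoring patches. Below is a minimal sketch under assumed tensor shapes; the function name and signature are illustrative, not the paper's actual code.

```python
import torch

def prune_by_cls_attention(tokens: torch.Tensor, attn: torch.Tensor, n_keep: int) -> torch.Tensor:
    """
    tokens: (batch, 1 + num_patches, dim), with the [CLS] token at index 0.
    attn:   (batch, num_heads, 1 + num_patches, 1 + num_patches)
            attention weights, e.g. from the last encoder layer.
    """
    cls_attn = attn[:, :, 0, 1:].mean(dim=1)            # (B, N): [CLS]->patch attention, averaged over heads
    keep = cls_attn.topk(n_keep, dim=-1).indices + 1    # +1 shifts past the [CLS] slot
    keep, _ = keep.sort(dim=-1)                         # keep surviving patches in original order
    patches = tokens.gather(1, keep.unsqueeze(-1).expand(-1, -1, tokens.shape[-1]))
    return torch.cat([tokens[:, :1], patches], dim=1)   # always retain [CLS] itself
```

Judging by their titles, several entries above vary this same skeleton: DivPrune swaps the attention score for a diversity criterion, and PACT follows the pruning step with clustering of the surviving tokens.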