hulianyuyy / iLLaVA
iLLaVA: An Image is Worth Fewer Than 1/3 Input Tokens in Large Multimodal Models
☆19 · Updated 6 months ago
Alternatives and similar repositories for iLLaVA
Users interested in iLLaVA are comparing it to the repositories listed below.
- [ICCV 2025] Dynamic-VLM ☆23 · Updated 7 months ago
- CLIP-MoE: Mixture of Experts for CLIP ☆42 · Updated 9 months ago
- [ICCV 2025] Official code for "AIM: Adaptive Inference of Multi-Modal LLMs via Token Merging and Pruning" ☆34 · Updated last month
- The official implementation of the paper "MMFuser: Multimodal Multi-Layer Feature Fuser for Fine-Grained Vision-Language Understanding". … ☆57 · Updated 9 months ago
- Official implementation of "Traceable Evidence Enhanced Visual Grounded Reasoning: Evaluation and Methodology" ☆49 · Updated 3 weeks ago
- [AAAI 2025] HiRED strategically drops visual tokens in the image encoding stage to improve inference efficiency for High-Resolution Visio… ☆41 · Updated 3 months ago
- ☆43 · Updated 9 months ago
- Official implementation of MIA-DPO ☆63 · Updated 6 months ago
- [EMNLP 2024] Official code for "Beyond Embeddings: The Promise of Visual Table in Multi-Modal Models" ☆20 · Updated 9 months ago
- [CVPR 2025] PVC: Progressive Visual Token Compression for Unified Image and Video Processing in Large Vision-Language Models ☆45 · Updated last month
- Official code for the paper "GRIT: Teaching MLLMs to Think with Images" ☆115 · Updated this week
- SophiaVL-R1: Reinforcing MLLMs Reasoning with Thinking Reward ☆72 · Updated this week
- [ICLR 2025] γ-MOD: Mixture-of-Depth Adaptation for Multimodal Large Language Models ☆37 · Updated 5 months ago
- Autoregressive Semantic Visual Reconstruction Helps VLMs Understand Better ☆36 · Updated last month
- Fast-Slow Thinking for Large Vision-Language Model Reasoning ☆17 · Updated 3 months ago
- Adapting LLaMA Decoder to Vision Transformer ☆29 · Updated last year
- High-Resolution Visual Reasoning via Multi-Turn Grounding-Based Reinforcement Learning ☆44 · Updated 2 weeks ago
- [ICCV 2025] Official code for the paper "Beyond Text-Visual Attention: Exploiting Visual Cues for Effective Token Pruning in VLMs" ☆19 · Updated last month
- Evaluation code for the paper "AV-Odyssey: Can Your Multimodal LLMs Really Understand Audio-Visual Information?" ☆26 · Updated 7 months ago
- Official implementation of the paper "ReTaKe: Reducing Temporal and Knowledge Redundancy for Long Video Understanding" ☆36 · Updated 4 months ago
- [NeurIPS 2024] MoME: Mixture of Multimodal Experts for Generalist Multimodal Large Language Models ☆69 · Updated 3 months ago
- Code for "AVG-LLaVA: A Multimodal Large Model with Adaptive Visual Granularity" ☆30 · Updated 9 months ago
- ☆54 · Updated 3 months ago
- [ECCV 2024] FlexAttention for Efficient High-Resolution Vision-Language Models ☆41 · Updated 7 months ago
- Official repo for the paper "[CLS] Token Tells Everything Needed for Training-free Efficient MLLMs" ☆22 · Updated 3 months ago
- Repo for the paper "T2Vid: Translating Long Text into Multi-Image is the Catalyst for Video-LLMs" ☆49 · Updated 4 months ago
- Mitigating Shortcuts in Visual Reasoning with Reinforcement Learning ☆33 · Updated last month
- ZoomEye: Enhancing Multimodal LLMs with Human-Like Zooming Capabilities through Tree-Based Image Exploration ☆47 · Updated 7 months ago
- Official Repository of Personalized Visual Instruct Tuning ☆32 · Updated 5 months ago
- M2-Reasoning: Empowering MLLMs with Unified General and Spatial Reasoning ☆35 · Updated 3 weeks ago