hulianyuyy / iLLaVA
iLLaVA: An Image is Worth Fewer Than 1/3 Input Tokens in Large Multimodal Models
☆19 · Updated 4 months ago
Alternatives and similar repositories for iLLaVA
Users interested in iLLaVA are comparing it to the repositories listed below
- [AAAI 2025] HiRED strategically drops visual tokens in the image encoding stage to improve inference efficiency for High-Resolution Vision-Language Models ☆39 · Updated 2 months ago
- 🚀 Video Compression Commander: Plug-and-Play Inference Acceleration for Video Large Language Models ☆23 · Updated 2 weeks ago
- The official implementation of the paper "MMFuser: Multimodal Multi-Layer Feature Fuser for Fine-Grained Vision-Language Understanding". ☆54 · Updated 7 months ago
- Fast-Slow Thinking for Large Vision-Language Model Reasoning ☆15 · Updated last month
- ☆42 · Updated 7 months ago
- Autoregressive Semantic Visual Reconstruction Helps VLMs Understand Better ☆29 · Updated last week
- [CVPR 2025] PVC: Progressive Visual Token Compression for Unified Image and Video Processing in Large Vision-Language Models ☆41 · Updated 2 weeks ago
- official repo for paper "[CLS] Token Tells Everything Needed for Training-free Efficient MLLMs" ☆22 · Updated 2 months ago
- This is the official repo for ByteVideoLLM/Dynamic-VLM ☆20 · Updated 6 months ago
- CLIP-MoE: Mixture of Experts for CLIP ☆42 · Updated 8 months ago
- Code and data for paper "Exploring Hallucination of Large Multimodal Models in Video Understanding: Benchmark, Analysis and Mitigation". ☆16 · Updated last month
- ☆21 · Updated 4 months ago
- VeriThinker: Learning to Verify Makes Reasoning Model Efficient ☆47 · Updated 3 weeks ago
- Multi-Stage Vision Token Dropping: Towards Efficient Multimodal Large Language Model ☆30 · Updated 5 months ago
- Look, Compare, Decide: Alleviating Hallucination in Large Vision-Language Models via Multi-View Multi-Path Reasoning ☆22 · Updated 9 months ago
- [NeurIPS 2023] Implementation of Foundation Model is Efficient Multimodal Multitask Model Selector ☆37 · Updated last year
- ☆49 · Updated last month
- ZoomEye: Enhancing Multimodal LLMs with Human-Like Zooming Capabilities through Tree-Based Image Exploration ☆37 · Updated 5 months ago
- Scaling Multi-modal Instruction Fine-tuning with Tens of Thousands Vision Task Types ☆19 · Updated 2 months ago
- [EMNLP 2024] Official code for "Beyond Embeddings: The Promise of Visual Table in Multi-Modal Models" ☆18 · Updated 8 months ago
- Official Repository of Personalized Visual Instruct Tuning ☆29 · Updated 3 months ago
- Adapting LLaMA Decoder to Vision Transformer ☆28 · Updated last year
- [CVPR 2025] Mono-InternVL: Pushing the Boundaries of Monolithic Multimodal Large Language Models with Endogenous Visual Pre-training ☆47 · Updated 3 months ago
- [ICLR 2025] γ-MoD: Mixture-of-Depth Adaptation for Multimodal Large Language Models ☆36 · Updated 4 months ago
- Official Repository: A Comprehensive Benchmark for Logical Reasoning in MLLMs ☆37 · Updated last week
- GIFT: Generative Interpretable Fine-Tuning ☆20 · Updated 8 months ago
- SophiaVL-R1: Reinforcing MLLMs Reasoning with Thinking Reward ☆54 · Updated last week
- Official implementation of MC-LLaVA. ☆28 · Updated 3 weeks ago
- [CVPR 2025] Few-shot Recognition via Stage-Wise Retrieval-Augmented Finetuning ☆19 · Updated this week
- [CVPR 2025] Adaptive Keyframe Sampling for Long Video Understanding ☆73 · Updated 2 months ago