hulianyuyy / iLLaVA
iLLaVA: An Image is Worth Fewer Than 1/3 Input Tokens in Large Multimodal Models
☆20 · Updated 10 months ago
Alternatives and similar repositories for iLLaVA
Users interested in iLLaVA are comparing it to the repositories listed below.
- [ICCV 2025] Dynamic-VLM ☆26 · Updated last year
- CLIP-MoE: Mixture of Experts for CLIP ☆50 · Updated last year
- [AAAI 2025] HiRED strategically drops visual tokens in the image encoding stage to improve inference efficiency for High-Resolution Vision-Language Models ☆43 · Updated 8 months ago
- [ICCV 2025] Official code for "AIM: Adaptive Inference of Multi-Modal LLMs via Token Merging and Pruning" ☆47 · Updated 2 months ago
- Official implementation of "Traceable Evidence Enhanced Visual Grounded Reasoning: Evaluation and Methodology" ☆71 · Updated last month
- ☆46 · Updated last year
- The official repo for LIFT: Language-Image Alignment with Fixed Text Encoders ☆40 · Updated 6 months ago
- [ICME 2024 Oral] DARA: Domain- and Relation-aware Adapters Make Parameter-efficient Tuning for Visual Grounding ☆23 · Updated 9 months ago
- (ICLR 2025 Spotlight) Official code repository for Interleaved Scene Graph. ☆31 · Updated 4 months ago
- [CVPR 2025] PVC: Progressive Visual Token Compression for Unified Image and Video Processing in Large Vision-Language Models ☆50 · Updated 6 months ago
- [NeurIPS 2025 Spotlight] Fast-Slow Thinking GRPO for Large Vision-Language Model Reasoning ☆21 · Updated this week
- [ECCV 2024] FlexAttention for Efficient High-Resolution Vision-Language Models ☆46 · Updated 11 months ago
- [ICLR 2025] γ-MoD: Mixture-of-Depth Adaptation for Multimodal Large Language Models ☆40 · Updated last month
- [EMNLP 2025 Main] Video Compression Commander: Plug-and-Play Inference Acceleration for Video Large Language Models ☆44 · Updated this week
- Official implementation of MIA-DPO ☆67 · Updated 10 months ago
- Multi-Stage Vision Token Dropping: Towards Efficient Multimodal Large Language Model ☆36 · Updated 11 months ago
- [ICLR 2025] MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models ☆92 · Updated last year
- [NeurIPS 2025] Unsupervised Post-Training for Multi-Modal LLM Reasoning via GRPO ☆70 · Updated last month
- [AAAI 2026] Global Compression Commander: Plug-and-Play Inference Acceleration for High-Resolution Large Vision-Language Models ☆36 · Updated this week
- M2-Reasoning: Empowering MLLMs with Unified General and Spatial Reasoning ☆46 · Updated 5 months ago
- ∞-Video: A Training-Free Approach to Long Video Understanding via Continuous-Time Memory Consolidation ☆19 · Updated 10 months ago
- [NeurIPS 2025] VeriThinker: Learning to Verify Makes Reasoning Model Efficient ☆63 · Updated 2 months ago
- The official implementation of the paper "MMFuser: Multimodal Multi-Layer Feature Fuser for Fine-Grained Vision-Language Understanding". ☆61 · Updated last year
- Official Repository of Personalized Visual Instruct Tuning ☆33 · Updated 9 months ago
- Adapting LLaMA Decoder to Vision Transformer ☆30 · Updated last year
- Official implementation of the paper "ReTaKe: Reducing Temporal and Knowledge Redundancy for Long Video Understanding" ☆38 · Updated 9 months ago
- MME-Unify: A Comprehensive Benchmark for Unified Multimodal Understanding and Generation Models ☆41 · Updated 8 months ago
- [ICCV 2025] p-MoD: Building Mixture-of-Depths MLLMs via Progressive Ratio Decay ☆43 · Updated 5 months ago
- Official repository of the video reasoning benchmark MMR-V. Can Your MLLMs "Think with Video"? ☆36 · Updated 5 months ago
- Repo for the paper "T2Vid: Translating Long Text into Multi-Image is the Catalyst for Video-LLMs" ☆48 · Updated 3 months ago