hulianyuyy / iLLaVA
iLLaVA: An Image is Worth Fewer Than 1/3 Input Tokens in Large Multimodal Models
☆19 · Updated 4 months ago
Alternatives and similar repositories for iLLaVA
Users interested in iLLaVA are comparing it to the repositories listed below.
- Video Compression Commander: Plug-and-Play Inference Acceleration for Video Large Language Models ☆22 · Updated this week
- [CVPR 2025] PVC: Progressive Visual Token Compression for Unified Image and Video Processing in Large Vision-Language Models ☆40 · Updated 3 months ago
- Official repo for ByteVideoLLM/Dynamic-VLM ☆20 · Updated 5 months ago
- [CVPR 2025] Mono-InternVL: Pushing the Boundaries of Monolithic Multimodal Large Language Models with Endogenous Visual Pre-training ☆46 · Updated 2 months ago
- Official repo for the paper "[CLS] Token Tells Everything Needed for Training-free Efficient MLLMs" ☆21 · Updated last month
- Multi-Stage Vision Token Dropping: Towards Efficient Multimodal Large Language Model ☆31 · Updated 4 months ago
- ☆12 · Updated this week
- Official implementation of MC-LLaVA ☆28 · Updated this week
- The official implementation of the paper "MMFuser: Multimodal Multi-Layer Feature Fuser for Fine-Grained Vision-Language Understanding" ☆53 · Updated 7 months ago
- LEO: A powerful Hybrid Multimodal LLM ☆18 · Updated 4 months ago
- [EMNLP 2024] Official code for "Beyond Embeddings: The Promise of Visual Table in Multi-Modal Models" ☆18 · Updated 7 months ago
- Official repository of "CoMP: Continual Multimodal Pre-training for Vision Foundation Models" ☆26 · Updated 2 months ago
- Fast-Slow Thinking for Large Vision-Language Model Reasoning ☆14 · Updated last month
- ☆42 · Updated 6 months ago
- [AAAI 2025] HiRED strategically drops visual tokens in the image encoding stage to improve inference efficiency for High-Resolution Vision-Language Models ☆36 · Updated last month
- [CVPR 2025] BOLT: Boost Large Vision-Language Model Without Training for Long-form Video Understanding ☆20 · Updated 2 months ago
- Official implementation of the paper "ReTaKe: Reducing Temporal and Knowledge Redundancy for Long Video Understanding" ☆34 · Updated 2 months ago
- ☆46 · Updated last month
- [ICLR 2025] γ-MOD: Mixture-of-Depth Adaptation for Multimodal Large Language Models ☆36 · Updated 3 months ago
- Official project page of "HiMix: Reducing Computational Complexity in Large Vision-Language Models" ☆11 · Updated 4 months ago
- CLIP-MoE: Mixture of Experts for CLIP ☆37 · Updated 7 months ago
- ☆84 · Updated 2 months ago
- TinyLLaVA-Video-R1: Towards Smaller LMMs for Video Reasoning ☆72 · Updated 2 weeks ago
- Official repository of Personalized Visual Instruct Tuning ☆28 · Updated 3 months ago
- [NeurIPS 2023] Implementation of "Foundation Model is Efficient Multimodal Multitask Model Selector" ☆36 · Updated last year
- Scaling Multi-modal Instruction Fine-tuning with Tens of Thousands Vision Task Types ☆18 · Updated last month
- ☆17 · Updated 3 months ago
- Code and data for the paper "Exploring Hallucination of Large Multimodal Models in Video Understanding: Benchmark, Analysis and Mitigation" ☆16 · Updated 3 weeks ago
- [CVPR 2025] PyramidDrop: Accelerating Your Large Vision-Language Models via Pyramid Visual Redundancy Reduction ☆105 · Updated 3 months ago
- [CVPR 2025] Adaptive Keyframe Sampling for Long Video Understanding ☆66 · Updated last month