hulianyuyy / iLLaVA
iLLaVA: An Image is Worth Fewer Than 1/3 Input Tokens in Large Multimodal Models
☆ 18 · Updated 3 months ago

Alternatives and similar repositories for iLLaVA

Users interested in iLLaVA are comparing it to the repositories listed below.
- This is the official repo for ByteVideoLLM/Dynamic-VLM (☆ 20, updated 4 months ago)
- [AAAI 2025] HiRED strategically drops visual tokens in the image encoding stage to improve inference efficiency for High-Resolution Visio… (☆ 32, updated 3 weeks ago)
- ☆ 41, updated 6 months ago
- The official implementation of the paper "MMFuser: Multimodal Multi-Layer Feature Fuser for Fine-Grained Vision-Language Understanding". … (☆ 52, updated 6 months ago)
- CLIP-MoE: Mixture of Experts for CLIP (☆ 32, updated 7 months ago)
- [CVPR 2025] Mono-InternVL: Pushing the Boundaries of Monolithic Multimodal Large Language Models with Endogenous Visual Pre-training (☆ 40, updated last month)
- Official repository of Personalized Visual Instruct Tuning (☆ 28, updated 2 months ago)
- Multi-Stage Vision Token Dropping: Towards Efficient Multimodal Large Language Model (☆ 28, updated 4 months ago)
- Official repository of "CoMP: Continual Multimodal Pre-training for Vision Foundation Models" (☆ 25, updated last month)
- [CVPR 2025] PVC: Progressive Visual Token Compression for Unified Image and Video Processing in Large Vision-Language Models (☆ 39, updated 2 months ago)
- [CVPR 2024] The official implementation of the paper "Synthesize, Diagnose, and Optimize: Towards Fine-Grained Vision-Language Understanding" (☆ 42, updated 2 months ago)
- ☆ 44, updated last week
- Official project page of "HiMix: Reducing Computational Complexity in Large Vision-Language Models" (☆ 10, updated 3 months ago)
- [EMNLP 2024] Official code for "Beyond Embeddings: The Promise of Visual Table in Multi-Modal Models" (☆ 17, updated 6 months ago)
- Scaling Multi-modal Instruction Fine-tuning with Tens of Thousands Vision Task Types (☆ 18, updated 3 weeks ago)
- Look, Compare, Decide: Alleviating Hallucination in Large Vision-Language Models via Multi-View Multi-Path Reasoning (☆ 21, updated 8 months ago)
- ☆ 14, updated 7 months ago
- Official implementation of MC-LLaVA (☆ 26, updated 3 months ago)
- [NeurIPS 2024] Official PyTorch implementation of "Seeing the Image: Prioritizing Visual Correlation by Contrastive Alignment" (☆ 57, updated 7 months ago)
- ZoomEye: Enhancing Multimodal LLMs with Human-Like Zooming Capabilities through Tree-Based Image Exploration (☆ 34, updated 4 months ago)
- VisRL: Intention-Driven Visual Perception via Reinforced Reasoning (☆ 27, updated last month)
- [ICLR 2025] γ-MOD: Mixture-of-Depth Adaptation for Multimodal Large Language Models (☆ 36, updated 2 months ago)
- Official repo for the paper "[CLS] Token Tells Everything Needed for Training-free Efficient MLLMs" (☆ 20, updated 2 weeks ago)
- Evaluation code for the paper "AV-Odyssey: Can Your Multimodal LLMs Really Understand Audio-Visual Information?" (☆ 24, updated 4 months ago)
- Official implementation of "Next Block Prediction: Video Generation via Semi-Autoregressive Modeling" (☆ 31, updated 3 months ago)
- ☆ 41, updated 4 months ago
- Official implementation of the paper "ReTaKe: Reducing Temporal and Knowledge Redundancy for Long Video Understanding" (☆ 33, updated last month)
- [ICML 2024] Memory-Space Visual Prompting for Efficient Vision-Language Fine-Tuning (☆ 49, updated last year)
- Adapting LLaMA Decoder to Vision Transformer (☆ 28, updated 11 months ago)
- Code for the paper "Unified Text-to-Image Generation and Retrieval" (☆ 15, updated 10 months ago)