ictnlp / LLaVA-Mini
LLaVA-Mini is a unified large multimodal model (LMM) that supports efficient understanding of images, high-resolution images, and videos.
☆474 · Updated 4 months ago
Alternatives and similar repositories for LLaVA-Mini
Users interested in LLaVA-Mini are comparing it to the repositories listed below.
- [CVPR 2024 Highlight🔥] Chat-UniVi: Unified Visual Representation Empowers Large Language Models with Image and Video Understanding ☆937 · Updated 7 months ago
- ☆400 · Updated 9 months ago
- Explore the Multimodal “Aha Moment” on 2B Model ☆586 · Updated last month
- SlowFast-LLaVA: A Strong Training-Free Baseline for Video Large Language Models ☆219 · Updated 8 months ago
- Rethinking Step-by-step Visual Reasoning in LLMs ☆293 · Updated 3 months ago
- Long Context Transfer from Language to Vision ☆374 · Updated last month
- Video-R1: Reinforcing Video Reasoning in MLLMs [🔥the first paper to explore R1 for video] ☆515 · Updated this week
- [ICML 2025] Official PyTorch implementation of LongVU ☆370 · Updated last week
- R1-onevision, a visual language model capable of deep CoT reasoning. ☆515 · Updated last month
- A curated list of research based on CLIP. ☆217 · Updated 5 months ago
- LLaVA-UHD v2: an MLLM Integrating High-Resolution Semantic Pyramid via Hierarchical Window Transformer ☆376 · Updated 3 weeks ago
- [CVPR2025 Highlight] Insight-V: Exploring Long-Chain Visual Reasoning with Multimodal Large Language Models ☆189 · Updated last month
- [CVPR2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts ☆321 · Updated 10 months ago
- 💡 VideoMind: A Chain-of-LoRA Agent for Long Video Reasoning ☆191 · Updated 3 weeks ago
- Project Page For "Seg-Zero: Reasoning-Chain Guided Segmentation via Cognitive Reinforcement" ☆341 · Updated last month
- Seed1.5-VL, a vision-language foundation model designed to advance general-purpose multimodal understanding and reasoning, achieving stat… ☆529 · Updated this week
- LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture ☆201 · Updated 4 months ago
- NeurIPS 2024 Paper: A Unified Pixel-level Vision LLM for Understanding, Generating, Segmenting, Editing ☆534 · Updated 6 months ago
- Official repository for the paper MG-LLaVA: Towards Multi-Granularity Visual Instruction Tuning (https://arxiv.org/abs/2406.17770). ☆156 · Updated 7 months ago
- This is the first paper to explore how to effectively use RL for MLLMs and introduces Vision-R1, a reasoning MLLM that leverages cold-sta… ☆559 · Updated last week
- MM-EUREKA: Exploring the Frontiers of Multimodal Reasoning with Rule-based Reinforcement Learning ☆597 · Updated 2 weeks ago
- ✨✨Beyond LLaVA-HD: Diving into High-Resolution Large Multimodal Models ☆158 · Updated 4 months ago
- [ICLR2025] LLaVA-HR: High-Resolution Large Language-Vision Assistant ☆237 · Updated 9 months ago
- 🔥🔥First-ever hour-scale video understanding models ☆314 · Updated 3 weeks ago
- [CVPR'25 highlight] RLAIF-V: Open-Source AI Feedback Leads to Super GPT-4V Trustworthiness ☆363 · Updated this week
- MM-IFEngine: Towards Multimodal Instruction Following ☆84 · Updated 2 weeks ago
- A fork to add multimodal model training to open-r1 ☆1,255 · Updated 3 months ago
- ☆359 · Updated 3 months ago
- Tarsier -- a family of large-scale video-language models designed to generate high-quality video descriptions, together with g… ☆369 · Updated 3 weeks ago
- Code for ChatRex: Taming Multimodal LLM for Joint Perception and Understanding ☆185 · Updated 3 months ago