mbzuai-oryx / LlamaV-o1
Rethinking Step-by-step Visual Reasoning in LLMs
Alternatives and similar repositories for LlamaV-o1:
Users interested in LlamaV-o1 are comparing it to the repositories listed below
- [CVPR2025 Highlight] Insight-V: Exploring Long-Chain Visual Reasoning with Multimodal Large Language Models
- Explore the multimodal “Aha Moment” on a 2B model
- EVE Series: Encoder-Free Vision-Language Models from BAAI
- Code for Visual Sketchpad: Sketching as a Visual Chain of Thought for Multimodal Language Models
- Long Context Transfer from Language to Vision
- LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture
- LLaVA-UHD v2: an MLLM Integrating High-Resolution Semantic Pyramid via Hierarchical Window Transformer
- Official code for the paper "Mantis: Multi-Image Instruction Tuning" [TMLR2024]
- [CVPR2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts
- [ICLR2025] LLaVA-HR: High-Resolution Large Language-Vision Assistant
- [NeurIPS'24 Spotlight] Visual CoT: Advancing Multi-Modal Language Models with a Comprehensive Dataset and Benchmark for Chain-of-Thought Reasoning
- [NeurIPS 2024] This repo contains evaluation code for the paper "Are We on the Right Way for Evaluating Large Vision-Language Models?"
- R1-onevision, a visual language model capable of deep CoT reasoning.
- The official repository for "2.5 Years in Class: A Multimodal Textbook for Vision-Language Pretraining"
- SFT or RL? An Early Investigation into Training R1-Like Reasoning Large Vision-Language Models
- MM-EUREKA: Exploring Visual Aha Moment with Rule-based Large-scale Reinforcement Learning
- [NeurIPS 2024] MoVA: Adapting Mixture of Vision Experts to Multimodal Context
- Official repository of "GoT: Unleashing Reasoning Capability of Multimodal Large Language Model for Visual Generation and Editing"
- E5-V: Universal Embeddings with Multimodal Large Language Models
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM*
- Official implementation of the Law of Vision Representation in MLLMs
- Video-R1: Reinforcing Video Reasoning in MLLMs [🔥the first paper to explore R1 for video]
- LLM2CLIP makes SOTA pretrained CLIP models even stronger.
- [TMLR] Public code repo for the paper "A Single Transformer for Scalable Vision-Language Modeling"
- Official repository for the paper "MG-LLaVA: Towards Multi-Granularity Visual Instruction Tuning" (https://arxiv.org/abs/2406.17770).
- [ECCV 2024 Oral] Code for the paper "An Image is Worth 1/2 Tokens After Layer 2: Plug-and-Play Inference Acceleration for Large Vision-Language Models"
- A minimal codebase for finetuning large multimodal models, supporting llava-1.5/1.6, llava-interleave, llava-next-video, llava-onevision, and more
- An open source implementation of CLIP (with TULIP support)
- SlowFast-LLaVA: A Strong Training-Free Baseline for Video Large Language Models
- [ICLR 2025] VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation