mbzuai-oryx / LlamaV-o1
[ACL 2025 🔥] Rethinking Step-by-step Visual Reasoning in LLMs
☆305 · Updated 3 months ago
Alternatives and similar repositories for LlamaV-o1
Users interested in LlamaV-o1 are comparing it to the repositories listed below:
- [CVPR 2025 Highlight] Insight-V: Exploring Long-Chain Visual Reasoning with Multimodal Large Language Models ☆221 · Updated 2 months ago
- LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture ☆211 · Updated 8 months ago
- Pixel-Level Reasoning Model trained with RL ☆204 · Updated last week
- Explore the Multimodal “Aha Moment” on a 2B Model ☆607 · Updated 5 months ago
- OpenThinkIMG is an end-to-end open-source framework that empowers LVLMs to think with images. ☆299 · Updated 3 months ago
- Long Context Transfer from Language to Vision ☆393 · Updated 5 months ago
- Code for Visual Sketchpad: Sketching as a Visual Chain of Thought for Multimodal Language Models ☆255 · Updated last month
- EVE Series: Encoder-Free Vision-Language Models from BAAI ☆349 · Updated last month
- ☆211 · Updated last year
- [ICCV 2025 Highlight] The official repository for "2.5 Years in Class: A Multimodal Textbook for Vision-Language Pretraining" ☆169 · Updated 6 months ago
- [ICLR 2025] LLaVA-HR: High-Resolution Large Language-Vision Assistant ☆240 · Updated last year
- [NeurIPS 2024] This repo contains evaluation code for the paper "Are We on the Right Way for Evaluating Large Vision-Language Models" ☆194 · Updated 11 months ago
- The Next Step Forward in Multimodal LLM Alignment ☆178 · Updated 4 months ago
- Official code for the paper "Mantis: Multi-Image Instruction Tuning" [TMLR 2024] ☆227 · Updated 5 months ago
- Visual Planning: Let's Think Only with Images ☆270 · Updated 3 months ago
- VideoChat-R1: Enhancing Spatio-Temporal Perception via Reinforcement Fine-Tuning ☆184 · Updated 3 weeks ago
- [NeurIPS'24 Spotlight] Visual CoT: Advancing Multi-Modal Language Models with a Comprehensive Dataset and Benchmark for Chain-of-Thought … ☆370 · Updated 8 months ago
- ✨✨ Beyond LLaVA-HD: Diving into High-Resolution Large Multimodal Models ☆162 · Updated 8 months ago
- Official implementation of the Law of Vision Representation in MLLMs ☆166 · Updated 10 months ago
- [CVPR 2025] VoCo-LLaMA: the official implementation of "VoCo-LLaMA: Towards Vision Compression with Large Language Models" ☆189 · Updated 2 months ago
- SFT or RL? An Early Investigation into Training R1-Like Reasoning Large Vision-Language Models ☆132 · Updated 4 months ago
- [CVPR 2025 Highlight] RLAIF-V: Open-Source AI Feedback Leads to Super GPT-4V Trustworthiness ☆410 · Updated 4 months ago
- OpenVLThinker: An Early Exploration to Vision-Language Reasoning via Iterative Self-Improvement ☆108 · Updated last month
- Official code for "Mini-o3: Scaling Up Reasoning Patterns and Interaction Turns for Visual Search" ☆186 · Updated this week
- [NeurIPS 2024] MoVA: Adapting Mixture of Vision Experts to Multimodal Context ☆166 · Updated 11 months ago
- Video-R1: Reinforcing Video Reasoning in MLLMs [🔥 the first paper to explore R1 for video] ☆685 · Updated last week
- 🚀 ReVisual-R1 is a 7B open-source multimodal language model that follows a three-stage curriculum: cold-start pre-training, multimodal rei… ☆178 · Updated 2 months ago
- An open-source implementation of CLIP (with TULIP support) ☆162 · Updated 4 months ago
- MMSearch-R1 is an end-to-end RL framework that enables LMMs to perform on-demand, multi-turn search with real-world multimodal search too… ☆310 · Updated 3 weeks ago
- SlowFast-LLaVA: A Strong Training-Free Baseline for Video Large Language Models ☆263 · Updated last year