mbzuai-oryx / LlamaV-o1
[ACL 2025 🔥] Rethinking Step-by-step Visual Reasoning in LLMs
☆310 · Updated 6 months ago
Alternatives and similar repositories for LlamaV-o1
Users who are interested in LlamaV-o1 are comparing it to the libraries listed below.
- [CVPR2025 Highlight] Insight-V: Exploring Long-Chain Visual Reasoning with Multimodal Large Language Models ☆229 · Updated last month
- LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture ☆211 · Updated 11 months ago
- Code for Visual Sketchpad: Sketching as a Visual Chain of Thought for Multimodal Language Models ☆269 · Updated 4 months ago
- Pixel-Level Reasoning Model trained with RL [NeurIPS 2025] ☆254 · Updated last month
- Official code for "Mini-o3: Scaling Up Reasoning Patterns and Interaction Turns for Visual Search" ☆372 · Updated 2 months ago
- Long Context Transfer from Language to Vision ☆398 · Updated 8 months ago
- Official code for the paper "Mantis: Multi-Image Instruction Tuning" [TMLR 2024] ☆234 · Updated 8 months ago
- OpenVLThinker: An Early Exploration to Vision-Language Reasoning via Iterative Self-Improvement ☆122 · Updated 4 months ago
- Fully Open Framework for Democratized Multimodal Training ☆640 · Updated last week
- [ICLR2025] LLaVA-HR: High-Resolution Large Language-Vision Assistant ☆246 · Updated last year
- ☆215 · Updated last year
- [ICCV 2025 Highlight] The official repository for "2.5 Years in Class: A Multimodal Textbook for Vision-Language Pretraining" ☆179 · Updated 8 months ago
- LLaVA-UHD v3: Progressive Visual Compression for Efficient Native-Resolution Encoding in MLLMs ☆399 · Updated last week
- EVE Series: Encoder-Free Vision-Language Models from BAAI ☆361 · Updated 4 months ago
- [COLM'25] Official implementation of the Law of Vision Representation in MLLMs ☆171 · Updated 2 months ago
- ☆105 · Updated 5 months ago
- CuMo: Scaling Multimodal LLM with Co-Upcycled Mixture-of-Experts ☆159 · Updated last year
- An open source implementation of CLIP (With TULIP Support) ☆163 · Updated 6 months ago
- OpenThinkIMG is an end-to-end open-source framework that empowers LVLMs to think with images. ☆329 · Updated 6 months ago
- Explore the Multimodal “Aha Moment” on a 2B Model ☆619 · Updated 8 months ago
- [CVPR2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts ☆331 · Updated last year
- The official code of "VL-Rethinker: Incentivizing Self-Reflection of Vision-Language Models with Reinforcement Learning" [NeurIPS 2025] ☆168 · Updated 6 months ago
- 🚀 ReVisual-R1 is a 7B open-source multimodal language model that follows a three-stage curriculum: cold-start pre-training, multimodal rei… ☆193 · Updated last month
- [NeurIPS 2024] This repo contains evaluation code for the paper "Are We on the Right Way for Evaluating Large Vision-Language Models" ☆200 · Updated last year
- Visual Planning: Let's Think Only with Images ☆283 · Updated 6 months ago
- [TMLR] Public code repo for the paper "A Single Transformer for Scalable Vision-Language Modeling" ☆147 · Updated last year
- The Next Step Forward in Multimodal LLM Alignment ☆189 · Updated 7 months ago
- [Fully open] [Encoder-free MLLM] Vision as LoRA ☆354 · Updated 5 months ago
- A minimal codebase for finetuning large multimodal models, supporting llava-1.5/1.6, llava-interleave, llava-next-video, llava-onevision,… ☆357 · Updated 2 weeks ago
- [NeurIPS 2025] VideoChat-R1 & R1.5: Enhancing Spatio-Temporal Perception and Reasoning via Reinforcement Fine-Tuning ☆238 · Updated last month