PKU-YuanGroup / LLaVA-o1
☆56 · Updated last year
Alternatives and similar repositories for LLaVA-o1
Users interested in LLaVA-o1 are comparing it to the repositories listed below.
- Code for Paper: Harnessing Webpage UIs for Text-Rich Visual Understanding ☆53 · Updated last year
- OpenVLThinker: An Early Exploration to Vision-Language Reasoning via Iterative Self-Improvement ☆126 · Updated 5 months ago
- [NeurIPS 2025] Elevating Visual Perception in Multimodal LLMs with Visual Embedding Distillation ☆68 · Updated 2 months ago
- The official repository of "R-4B: Incentivizing General-Purpose Auto-Thinking Capability in MLLMs via Bi-Mode Integration" ☆131 · Updated 4 months ago
- ☆69 · Updated last year
- Web2Code: A Large-scale Webpage-to-Code Dataset and Evaluation Framework for Multimodal LLMs ☆98 · Updated last year
- LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via a Hybrid Architecture ☆212 · Updated last year
- ☆67 · Updated 9 months ago
- Geometric-Mean Policy Optimization ☆96 · Updated last month
- This is the repo for the paper "Pangea: A Fully Open Multilingual Multimodal LLM for 39 Languages" ☆117 · Updated 6 months ago
- ☆41 · Updated 7 months ago
- ☆226 · Updated 10 months ago
- The open-source code of MetaStone-S1. ☆106 · Updated 5 months ago
- ☆23 · Updated last year
- ☆71 · Updated last year
- ☆68 · Updated 3 months ago
- [ACL 2025 Findings] Benchmarking Multihop Multimodal Internet Agents ☆47 · Updated 10 months ago
- [IEEE VIS 2024] LLaVA-Chart: Advancing Multimodal Large Language Models in Chart Question Answering with Visualization-Referenced Instruc… ☆73 · Updated 11 months ago
- ☆105 · Updated 7 months ago
- A minimal implementation of LLaVA-style VLM with interleaved image & text & video processing ability. ☆97 · Updated last year
- ☆35 · Updated 11 months ago
- ☆75 · Updated last year
- [ACL 2025 🔥] Rethinking Step-by-step Visual Reasoning in LLMs ☆310 · Updated 7 months ago
- The official repo for "Unleashing the Reasoning Potential of Pre-trained LLMs by Critique Fine-Tuning on One Problem" [EMNLP 2025] ☆33 · Updated 4 months ago
- [ACL 2025] A Generalizable and Purely Unsupervised Self-Training Framework ☆71 · Updated 7 months ago
- [ICCV 2025 Highlight] The official repository for "2.5 Years in Class: A Multimodal Textbook for Vision-Language Pretraining" ☆187 · Updated 9 months ago
- ☆84 · Updated 9 months ago
- [WACV 2025 Oral] Vision-language conversation in 10 languages including English, Chinese, French, Spanish, Russian, Japanese, Arabic, H… ☆84 · Updated 5 months ago
- Vision Search Assistant: Empower Vision-Language Models as Multimodal Search Engines ☆128 · Updated last year
- THOUGHTSCULPT, a general reasoning and search method for complex tasks ☆13 · Updated last year