[ACL 2025 🔥] Rethinking Step-by-step Visual Reasoning in LLMs
★310 · Updated May 21, 2025
Alternatives and similar repositories for LlamaV-o1
Users that are interested in LlamaV-o1 are comparing it to the libraries listed below.
- [ICCV 2025] LLaVA-CoT, a visual language model capable of spontaneous, systematic reasoning (★2,131 · Updated Dec 12, 2025)
- [CVPRW-25 MMFM] Official repository of the paper "How Good is my Video LMM? Complex Video Reasoning and Robustness Evaluation Suite fo…" (★50 · Updated Aug 23, 2024)
- A fork to add multimodal model training to open-r1 (★1,514 · Updated Feb 8, 2025)
- [CVPR 2025 Highlight] Insight-V: Exploring Long-Chain Visual Reasoning with Multimodal Large Language Models (★238 · Updated Nov 7, 2025)
- [CVPR 2025 🔥] A Large Multimodal Model for Pixel-Level Visual Grounding in Videos (★99 · Updated Apr 14, 2025)
- (ICCV 2023) Generative Multiplane Neural Radiance for 3D-Aware Image Generation (★19 · Updated Sep 28, 2023)
- [ACL 2025 🔥] Time Travel is a comprehensive benchmark to evaluate LMMs on historical and cultural artifacts (★19 · Updated May 22, 2025)
- ★112 · Updated Jan 8, 2025
- [MICCAI 2025] Hierarchical Self-Supervised Adversarial Training for Robust Vision Models in Histopathology (★12 · Updated Jun 17, 2025)
- [MICCAI 2023, Early Accept] Official code repository of the paper "Cross-modulated Few-shot Image Generation for Colorectal Tissue Cla…" (★47 · Updated Sep 28, 2023)
- [CVPR 2025 🔥] ALM-Bench is a multilingual multi-modal diverse cultural benchmark for 100 languages across 19 categories. It assesses the… (★46 · Updated May 26, 2025)
- [NeurIPS'24 Spotlight] Visual CoT: Advancing Multi-Modal Language Models with a Comprehensive Dataset and Benchmark for Chain-of-Thought … (★440 · Updated Dec 22, 2024)
- Learnable Weight Initialization for Volumetric Medical Image Segmentation [Elsevier AIM 2024] (★22 · Updated Oct 27, 2024)
- [ICCVW 2025 (Oral)] Robust-LLaVA: On the Effectiveness of Large-Scale Robust Image Encoders for Multi-modal Large Language Models (★29 · Updated Oct 20, 2025)
- Explore the Multimodal "Aha Moment" on a 2B Model (★623 · Updated Mar 18, 2025)
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… (★951 · Updated Aug 5, 2025)
- ✨ First open-source R1-like Video-LLM [2025/02/18] (★382 · Updated Feb 23, 2025)
- [CVPRW 2025] Official repository of the paper "Towards Evaluating the Robustness of Visual State Space Models" (★26 · Updated Jun 8, 2025)
- MM-EUREKA: Exploring the Frontiers of Multimodal Reasoning with Rule-based Reinforcement Learning (★772 · Updated Sep 7, 2025)
- Official code of "Virgo: A Preliminary Exploration on Reproducing o1-like MLLM" (★109 · Updated May 27, 2025)
- ARB: A Comprehensive Arabic Multimodal Reasoning Benchmark (★17 · Updated May 25, 2025)
- [NAACL 2025 🔥] CAMEL-Bench is an Arabic benchmark for evaluating multimodal models across eight domains with 29,000 questions. (★37 · Updated Apr 17, 2025)
- [NeurIPS 2024] Calibrated Self-Rewarding Vision Language Models (★87 · Updated Oct 26, 2025)
- [ICLR 2026] The first paper to explore how to effectively use R1-like RL for MLLMs, introducing Vision-R1, a reasoning MLLM that… (★1,094 · Updated Mar 20, 2026)
- MM-Instruct: Generated Visual Instructions for Large Multimodal Model Alignment (★35 · Updated Jul 1, 2024)
- Official repo and evaluation implementation of VSI-Bench (★691 · Updated Aug 5, 2025)
- Extend OpenRLHF to support LMM RL training for reproduction of DeepSeek-R1 on multimodal tasks (★846 · Updated May 14, 2025)
- [EMNLP'23] ClimateGPT: a specialized LLM for conversations related to Climate Change and Sustainability topics in both English and Arabi… (★79 · Updated Sep 24, 2024)
- ★99 · Updated Jun 23, 2025
- Composed Video Retrieval (★63 · Updated May 2, 2024)
- VideoMathQA is a benchmark designed to evaluate mathematical reasoning in real-world educational videos (★23 · Updated Jan 26, 2026)
- Official repository of the paper "VideoGPT+: Integrating Image and Video Encoders for Enhanced Video Understanding" (★292 · Updated Aug 5, 2025)
- Cambrian-1 is a family of multimodal LLMs with a vision-centric design (★1,995 · Updated Nov 7, 2025)
- ★47 · Updated Dec 30, 2024
- [ICLR 2025] Mathematical Visual Instruction Tuning for Multi-modal Large Language Models (★153 · Updated Dec 5, 2024)
- [BMVC 2025] Official implementation of the paper "PerSense: Personalized Instance Segmentation in Dense Images" (★28 · Updated Dec 18, 2025)
- The code for "VISTA: Enhancing Long-Duration and High-Resolution Video Understanding by VIdeo SpatioTemporal Augmentation" [CVPR 2025] (★21 · Updated Feb 27, 2025)
- Solve Visual Understanding with Reinforced VLMs (★5,898 · Updated Mar 12, 2026)
- Official repo for the Pixel-LLM codebase (★1,565 · Updated Feb 27, 2026)