mbzuai-oryx / LlamaV-o1
Rethinking Step-by-step Visual Reasoning in LLMs
☆259 · Updated last month
Alternatives and similar repositories for LlamaV-o1:
Users interested in LlamaV-o1 are comparing it to the repositories listed below.
- LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture ☆194 · Updated last month
- EVE Series: Encoder-Free Vision-Language Models from BAAI ☆302 · Updated this week
- Official code for paper "Mantis: Multi-Image Instruction Tuning" [TMLR 2024] ☆202 · Updated this week
- ✨✨Beyond LLaVA-HD: Diving into High-Resolution Large Multimodal Models ☆152 · Updated 2 months ago
- [CVPR2025] Insight-V: Exploring Long-Chain Visual Reasoning with Multimodal Large Language Models ☆136 · Updated this week
- Long Context Transfer from Language to Vision ☆364 · Updated 3 months ago
- LLaVA-UHD v2: an MLLM Integrating High-Resolution Feature Pyramid via Hierarchical Window Transformer ☆367 · Updated last month
- [CVPR2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts ☆315 · Updated 7 months ago
- SlowFast-LLaVA: A Strong Training-Free Baseline for Video Large Language Models ☆203 · Updated 5 months ago
- [NeurIPS'24 Spotlight] Visual CoT: Advancing Multi-Modal Language Models with a Comprehensive Dataset and Benchmark for Chain-of-Thought … ☆246 · Updated 2 months ago
- Codes for Visual Sketchpad: Sketching as a Visual Chain of Thought for Multimodal Language Models ☆164 · Updated 4 months ago
- [NeurIPS 2024] This repo contains evaluation code for the paper "Are We on the Right Way for Evaluating Large Vision-Language Models" ☆167 · Updated 5 months ago
- E5-V: Universal Embeddings with Multimodal Large Language Models ☆233 · Updated 2 months ago
- CuMo: Scaling Multimodal LLM with Co-Upcycled Mixture-of-Experts ☆143 · Updated 8 months ago
- [ICLR 2025] VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation ☆232 · Updated last month
- Official implementation of the Law of Vision Representation in MLLMs ☆150 · Updated 3 months ago
- 📖 This is a repository for organizing papers, codes and other resources related to unified multimodal models. ☆388 · Updated last month
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆91 · Updated this week
- [AAAI-25] Cobra: Extending Mamba to Multi-modal Large Language Model for Efficient Inference ☆266 · Updated last month
- Python library to evaluate VLM models' robustness across diverse benchmarks ☆192 · Updated this week
- [ICLR2025] LLaVA-HR: High-Resolution Large Language-Vision Assistant ☆230 · Updated 6 months ago
- The official repository for "2.5 Years in Class: A Multimodal Textbook for Vision-Language Pretraining" ☆142 · Updated last month
- [TMLR] Public code repo for paper "A Single Transformer for Scalable Vision-Language Modeling" ☆130 · Updated 3 months ago
- [ECCV 2024 Oral] Code for paper: An Image is Worth 1/2 Tokens After Layer 2: Plug-and-Play Inference Acceleration for Large Vision-Langua… ☆369 · Updated 2 months ago
- A family of highly capable yet efficient large multimodal models ☆176 · Updated 6 months ago
- This is the official code of VideoAgent: A Memory-augmented Multimodal Agent for Video Understanding (ECCV 2024) ☆171 · Updated 2 months ago
- This is the official repository of our paper "What If We Recaption Billions of Web Images with LLaMA-3?" ☆127 · Updated 8 months ago
- This is the official implementation of "Flash-VStream: Memory-Based Real-Time Understanding for Long Video Streams" ☆164 · Updated 2 months ago
- Matryoshka Multimodal Models ☆97 · Updated last month