RUCAIBox / Virgo
Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM*
☆31 · Updated this week
Alternatives and similar repositories for Virgo:
Users interested in Virgo are comparing it to the repositories listed below.
- ☆29 · Updated last week
- The codebase for our EMNLP24 paper: Multimodal Self-Instruct: Synthetic Abstract Image and Visual Reasoning Instruction Using Language Mo… ☆67 · Updated last month
- ☆47 · Updated this week
- Enhancing Large Vision Language Models with Self-Training on Image Comprehension. ☆62 · Updated 7 months ago
- MAmmoTH-VL: Eliciting Multimodal Reasoning with Instruction Tuning at Scale ☆25 · Updated last month
- ☆59 · Updated 11 months ago
- Official implementation of MIA-DPO ☆48 · Updated 2 months ago
- MATH-Vision dataset and code to measure Multimodal Mathematical Reasoning capabilities. ☆77 · Updated 3 months ago
- Insight-V: Exploring Long-Chain Visual Reasoning with Multimodal Large Language Models ☆124 · Updated 3 weeks ago
- [NeurIPS'24] Official PyTorch Implementation of Seeing the Image: Prioritizing Visual Correlation by Contrastive Alignment ☆56 · Updated 3 months ago
- ☆42 · Updated 5 months ago
- ☆107 · Updated 5 months ago
- The official repo for ByteVideoLLM/Dynamic-VLM ☆18 · Updated 3 weeks ago
- Video-STaR: Self-Training Enables Video Instruction Tuning with Any Supervision ☆58 · Updated 6 months ago
- The official repository for "2.5 Years in Class: A Multimodal Textbook for Vision-Language Pretraining" ☆97 · Updated this week
- The official GitHub page for "What Makes for Good Visual Instructions? Synthesizing Complex Visual Reasoning Instructions for Visual Ins… ☆18 · Updated last year
- MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models ☆58 · Updated 3 months ago
- A Self-Training Framework for Vision-Language Reasoning ☆57 · Updated last month
- ☆17 · Updated 10 months ago
- Official Repository of VideoLLaMB: Long Video Understanding with Recurrent Memory Bridges ☆59 · Updated 3 months ago
- ☆92 · Updated last year
- ☆44 · Updated 8 months ago
- The official code of the paper "PyramidDrop: Accelerating Your Large Vision-Language Models via Pyramid Visual Redundancy Reduction". ☆50 · Updated this week
- The official implementation of "VoCo-LLaMA: Towards Vision Compression with Large Language Models". ☆88 · Updated 6 months ago
- A bug-free and improved implementation of LLaVA-UHD, based on the code from the official repo ☆32 · Updated 4 months ago
- Official code for "What Makes for Good Visual Tokenizers for Large Language Models?". ☆56 · Updated last year
- Draw-and-Understand: Leveraging Visual Prompts to Enable MLLMs to Comprehend What You Want ☆63 · Updated 2 months ago
- ☆23 · Updated 5 months ago
- Making LLaVA Tiny via MoE-Knowledge Distillation ☆76 · Updated 2 months ago
- VL-GPT: A Generative Pre-trained Transformer for Vision and Language Understanding and Generation ☆84 · Updated 3 months ago