Richar-Du / Virgo
Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM*
☆20 · Updated last week
Alternatives and similar repositories for Virgo
Users interested in Virgo are comparing it to the repositories listed below.
- [ArXiv] V2PE: Improving Multimodal Long-Context Capability of Vision-Language Models with Variable Visual Position Encoding ☆47 · Updated 5 months ago
- ☆15 · Updated last month
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆103 · Updated last week
- NoisyRollout: Reinforcing Visual Reasoning with Data Augmentation ☆65 · Updated this week
- ☆77 · Updated 4 months ago
- This repo contains evaluation code for the paper "MileBench: Benchmarking MLLMs in Long Context" ☆34 · Updated 10 months ago
- ☆45 · Updated last month
- [SCIS 2024] The official implementation of the paper "MMInstruct: A High-Quality Multi-Modal Instruction Tuning Dataset with Extensive Di… ☆52 · Updated 6 months ago
- Preference Learning for LLaVA ☆45 · Updated 6 months ago
- The code and data for the paper JiuZhang3.0 ☆45 · Updated last year
- [ICML 2025] M-STAR (Multimodal Self-Evolving TrAining for Reasoning) Project. Diving into Self-Evolving Training for Multimodal Reasoning ☆60 · Updated 5 months ago
- ☆99 · Updated last year
- The official repo for "VisualWebInstruct: Scaling up Multimodal Instruction Data through Web Search" ☆25 · Updated last month
- A simulated dataset consisting of 9,536 charts and associated data annotations in CSV format ☆25 · Updated last year
- [AAAI 2025] HiRED strategically drops visual tokens in the image encoding stage to improve inference efficiency for High-Resolution Visio… ☆36 · Updated last month
- Less is More: Mitigating Multimodal Hallucination from an EOS Decision Perspective (ACL 2024) ☆50 · Updated 7 months ago
- The codebase for our EMNLP24 paper: Multimodal Self-Instruct: Synthetic Abstract Image and Visual Reasoning Instruction Using Language Mo… ☆78 · Updated 4 months ago
- 🚀 LLaMA-MoE v2: Exploring Sparsity of LLaMA from Perspective of Mixture-of-Experts with Post-Training ☆86 · Updated 6 months ago
- A benchmark for evaluating the capabilities of large vision-language models (LVLMs) ☆46 · Updated last year
- ☆17 · Updated last year
- A Self-Training Framework for Vision-Language Reasoning ☆80 · Updated 4 months ago
- ☆26 · Updated 10 months ago
- VideoNIAH: A Flexible Synthetic Method for Benchmarking Video MLLMs ☆47 · Updated 2 months ago
- ☆91 · Updated last year
- The official GitHub page for "What Makes for Good Visual Instructions? Synthesizing Complex Visual Reasoning Instructions for Visual Ins… ☆19 · Updated last year
- [ICCV2023] Official code for "VL-PET: Vision-and-Language Parameter-Efficient Tuning via Granularity Control" ☆53 · Updated last year
- Can Atomic Step Decomposition Enhance the Self-structured Reasoning of Multimodal Large Models? ☆23 · Updated 2 months ago
- SFT or RL? An Early Investigation into Training R1-Like Reasoning Large Vision-Language Models ☆115 · Updated last month
- [EMNLP 2024] mDPO: Conditional Preference Optimization for Multimodal Large Language Models ☆74 · Updated 6 months ago
- MLLM-Bench: Evaluating Multimodal LLMs with Per-sample Criteria ☆69 · Updated 7 months ago