linjh1118 / Awesome-MLLM-For-Games
MLLM @ Game
☆14 · Updated 4 months ago
Alternatives and similar repositories for Awesome-MLLM-For-Games
Users interested in Awesome-MLLM-For-Games are comparing it to the repositories listed below.
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆109 · Updated 4 months ago
- ☆90 · Updated last year
- Rethinking RL Scaling for Vision Language Models: A Transparent, From-Scratch Framework and Comprehensive Evaluation Scheme ☆143 · Updated 6 months ago
- Multimodal Open-O1 (MO1) is designed to enhance the accuracy of inference models by utilizing a novel prompt-based approach. This tool wo… ☆29 · Updated last year
- A Self-Training Framework for Vision-Language Reasoning ☆86 · Updated 8 months ago
- (ICLR'25) A Comprehensive Framework for Developing and Evaluating Multimodal Role-Playing Agents ☆86 · Updated 8 months ago
- The Next Step Forward in Multimodal LLM Alignment ☆181 · Updated 5 months ago
- Official repository of "AtomThink: Multimodal Slow Thinking with Atomic Step Reasoning" ☆56 · Updated 2 months ago
- ☆72 · Updated 4 months ago
- MMR1: Enhancing Multimodal Reasoning with Variance-Aware Sampling and Open Resources ☆197 · Updated 2 weeks ago
- SFT+RL boosts multimodal reasoning ☆34 · Updated 3 months ago
- ☆92 · Updated 9 months ago
- [ICCV 2025 Highlight] The official repository for "2.5 Years in Class: A Multimodal Textbook for Vision-Language Pretraining" ☆172 · Updated 6 months ago
- Vision Search Assistant: Empower Vision-Language Models as Multimodal Search Engines ☆126 · Updated 11 months ago
- [NeurIPS 2025] MINT-CoT: Enabling Interleaved Visual Tokens in Mathematical Chain-of-Thought Reasoning ☆75 · Updated 3 weeks ago
- Code for paper: Reinforced Vision Perception with Tools ☆52 · Updated last week
- A Framework for Decoupling and Assessing the Capabilities of VLMs ☆43 · Updated last year
- [ICCV 2025] Explore the Limits of Omni-modal Pretraining at Scale ☆116 · Updated last year
- Official code implementation of Slow Perception: Let's Perceive Geometric Figures Step-by-step ☆131 · Updated 2 months ago
- ☆74 · Updated last year
- A multimodal large model implemented from scratch, named Reyes (睿视; R for 睿 "insight", eyes for 眼 "eyes"). Reyes has 8B parameters, uses InternViT-300M-448px-V2_5 as its vision encoder and Qwen2.5-7B-Instruct on the language-model side, and connects the two through a two-layer MLP projection… ☆26 · Updated 7 months ago
- Pre-trained, Scalable, High-performance Reward Models via Policy Discriminative Learning ☆157 · Updated 2 weeks ago
- [ArXiv] V2PE: Improving Multimodal Long-Context Capability of Vision-Language Models with Variable Visual Position Encoding ☆57 · Updated 9 months ago
- ☆32 · Updated 2 weeks ago
- The first attempt to replicate o3-like visual clue-tracking reasoning capabilities ☆57 · Updated 3 months ago
- G1: Bootstrapping Perception and Reasoning Abilities of Vision-Language Model via Reinforcement Learning ☆85 · Updated 4 months ago
- Official repository of the MMDU dataset ☆95 · Updated last year
- MMSearch-R1 is an end-to-end RL framework that enables LMMs to perform on-demand, multi-turn search with real-world multimodal search too… ☆326 · Updated last month
- GPG: A Simple and Strong Reinforcement Learning Baseline for Model Reasoning ☆164 · Updated 4 months ago
- MM-PRM: Enhancing Multimodal Mathematical Reasoning with Scalable Step-Level Supervision ☆25 · Updated 4 months ago