linjh1118 / Awesome-MLLM-For-Games
MLLM @ Game
☆14 · Updated 4 months ago
Alternatives and similar repositories for Awesome-MLLM-For-Games
Users interested in Awesome-MLLM-For-Games are comparing it to the libraries listed below
- ☆90 · Updated last year
- Rethinking RL Scaling for Vision Language Models: A Transparent, From-Scratch Framework and Comprehensive Evaluation Scheme ☆141 · Updated 5 months ago
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆107 · Updated 3 months ago
- Multimodal Open-O1 (MO1) is designed to enhance the accuracy of inference models by utilizing a novel prompt-based approach. This tool wo… ☆29 · Updated 11 months ago
- Vision Search Assistant: Empower Vision-Language Models as Multimodal Search Engines ☆126 · Updated 10 months ago
- The Next Step Forward in Multimodal LLM Alignment ☆179 · Updated 4 months ago
- ☆72 · Updated 3 months ago
- A Self-Training Framework for Vision-Language Reasoning ☆84 · Updated 7 months ago
- ZO2 (Zeroth-Order Offloading): Full Parameter Fine-Tuning 175B LLMs with 18GB GPU Memory ☆183 · Updated 2 months ago
- MMR1: Advancing the Frontiers of Multimodal Reasoning ☆163 · Updated 6 months ago
- G1: Bootstrapping Perception and Reasoning Abilities of Vision-Language Model via Reinforcement Learning ☆81 · Updated 3 months ago
- Pixel-Level Reasoning Model trained with RL ☆204 · Updated last week
- SFT+RL boosts multimodal reasoning ☆30 · Updated 2 months ago
- ☆25 · Updated last week
- The first attempt to replicate o3-like visual clue-tracking reasoning capabilities. ☆57 · Updated 2 months ago
- A Framework for Decoupling and Assessing the Capabilities of VLMs ☆43 · Updated last year
- [ICCV 2025 Highlight] The official repository for "2.5 Years in Class: A Multimodal Textbook for Vision-Language Pretraining" ☆169 · Updated 6 months ago
- [ACM MM25] The official code of "Breaking the Modality Barrier: Universal Embedding Learning with Multimodal LLMs" ☆90 · Updated last month
- Official Repository of "AtomThink: Multimodal Slow Thinking with Atomic Step Reasoning" ☆55 · Updated last month
- (ICLR'25) A Comprehensive Framework for Developing and Evaluating Multimodal Role-Playing Agents ☆83 · Updated 7 months ago
- ☆88 · Updated 8 months ago
- MULTI-Benchmark: Multimodal Understanding Leaderboard with Text and Images ☆42 · Updated 3 months ago
- Pre-trained, Scalable, High-performance Reward Models via Policy Discriminative Learning. ☆152 · Updated last week
- "what, how, where, and how well? a survey on test-time scaling in large language models" repository☆67Updated this week
- An Arena-style Automated Evaluation Benchmark for Detailed Captioning ☆55 · Updated 3 months ago
- MMSearch-R1 is an end-to-end RL framework that enables LMMs to perform on-demand, multi-turn search with real-world multimodal search too… ☆312 · Updated 3 weeks ago
- SophiaVL-R1: Reinforcing MLLMs Reasoning with Thinking Reward ☆78 · Updated last month
- A multimodal large model implemented from scratch, named Reyes (睿视): "R" for 睿 (insight) and "eyes" for 眼 (eyes). Reyes has 8B parameters, uses InternViT-300M-448px-V2_5 as its vision encoder and Qwen2.5-7B-Instruct on the language-model side, and connects the two through a two-layer MLP projection layer… (see the sketch after this list) ☆26 · Updated 7 months ago
- A Simple Framework of Small-scale LMMs for Video Understanding ☆92 · Updated 3 months ago
- MME-Unify: A Comprehensive Benchmark for Unified Multimodal Understanding and Generation Models ☆41 · Updated 5 months ago
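The Reyes entry above describes the common connector pattern for multimodal LLMs: a vision encoder, a small MLP projector, and an instruction-tuned language model. The sketch below is a minimal, hypothetical PyTorch illustration of such a two-layer MLP projector; the class name and hidden dimensions are illustrative assumptions, not the Reyes implementation.

```python
# Minimal sketch of a two-layer MLP projector bridging a vision encoder and an LLM.
# Dimensions are assumptions for illustration (not taken from the Reyes repository).
import torch
import torch.nn as nn


class TwoLayerMLPProjector(nn.Module):
    """Maps vision-encoder patch features into the LLM's embedding space."""

    def __init__(self, vision_dim: int = 1024, llm_dim: int = 3584):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(vision_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, vision_features: torch.Tensor) -> torch.Tensor:
        # vision_features: (batch, num_patches, vision_dim)
        # returns:         (batch, num_patches, llm_dim)
        return self.proj(vision_features)


if __name__ == "__main__":
    projector = TwoLayerMLPProjector()
    dummy_patches = torch.randn(1, 256, 1024)  # placeholder ViT patch features
    visual_tokens = projector(dummy_patches)   # ready to concatenate with text embeddings
    print(visual_tokens.shape)                 # torch.Size([1, 256, 3584])
```

In this pattern the projected visual tokens are concatenated with the text token embeddings before being fed to the language model, which is how most LLaVA-style models stitch the two modalities together.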