lwpyh / Awesome-MLLM-Reasoning-Collection
A collection of multimodal reasoning papers, code, datasets, benchmarks and resources.
⭐323 · Updated this week
Alternatives and similar repositories for Awesome-MLLM-Reasoning-Collection
Users interested in Awesome-MLLM-Reasoning-Collection are comparing it to the libraries listed below.
- A collection of token reduction (token pruning, merging, clustering, etc.) techniques for ML/AI ⭐190 · Updated 2 months ago
- [NAACL 2025 Oral] From redundancy to relevance: Enhancing explainability in multimodal large language models ⭐120 · Updated 8 months ago
- An open-source implementation for training LLaVA-NeXT. ⭐423 · Updated last year
- A curated collection of resources, tools, and frameworks for developing GUI Agents. ⭐162 · Updated last week
- The code for "TokenPacker: Efficient Visual Projector for Multimodal LLM", IJCV 2025 ⭐271 · Updated 5 months ago
- Collections of Papers and Projects for Multimodal Reasoning. ⭐105 · Updated 6 months ago
- A Comprehensive Survey on Evaluating Reasoning Capabilities in Multimodal Large Language Models. ⭐69 · Updated 7 months ago
- ✨✨Long-VITA: Scaling Large Multi-modal Models to 1 Million Tokens with Leading Short-Context Accuracy ⭐302 · Updated 5 months ago
- [ICLR 2025] Mathematical Visual Instruction Tuning for Multi-modal Large Language Models ⭐152 · Updated 10 months ago
- (ICCV 2025) Enhance CLIP and MLLM's fine-grained visual representations with generative models. ⭐73 · Updated 4 months ago
- 🔥CVPR 2025 Multimodal Large Language Models Paper List ⭐156 · Updated 7 months ago
- The Next Step Forward in Multimodal LLM Alignment ⭐184 · Updated 6 months ago
- [NeurIPS 2025] Efficient Reasoning Vision Language Models ⭐407 · Updated last month
- (ECCV 2024) Empowering Multimodal Large Language Model as a Powerful Data Generator ⭐114 · Updated 7 months ago
- [ICLR'25] Official code for the paper 'MLLMs Know Where to Look: Training-free Perception of Small Visual Details with Multimodal LLMs' ⭐284 · Updated 6 months ago
- This repository will continuously update the latest papers, technical reports, and benchmarks on multimodal reasoning! ⭐54 · Updated 7 months ago
- ✨✨R1-Reward: Training Multimodal Reward Model Through Stable Reinforcement Learning ⭐265 · Updated 5 months ago
- [NeurIPS 2024] Repo for the paper "ControlMLLM: Training-Free Visual Prompt Learning for Multimodal Large Language Models" ⭐195 · Updated 3 months ago
- Latest Advances on (RL-based) Multimodal Reasoning and Generation in Multimodal Large Language Models ⭐41 · Updated this week
- This repository is the official implementation of "Look-Back: Implicit Visual Re-focusing in MLLM Reasoning". ⭐66 · Updated 3 months ago
- [ECCV 2024] Paying More Attention to Image: A Training-Free Method for Alleviating Hallucination in LVLMs ⭐147 · Updated 11 months ago
- [NeurIPS'24 Spotlight] Visual CoT: Advancing Multi-Modal Language Models with a Comprehensive Dataset and Benchmark for Chain-of-Thought … ⭐389 · Updated 10 months ago
- [ICML 2025] Official repository for paper "Scaling Video-Language Models to 10K Frames via Hierarchical Differential Distillation" ⭐177 · Updated last month
- Chain-of-Spot: Interactive Reasoning Improves Large Vision-language Models ⭐98 · Updated last year
- ✨First Open-Source R1-like Video-LLM [2025/02/18] ⭐369 · Updated 8 months ago
- [ICML 2025] Official implementation of paper 'Look Twice Before You Answer: Memory-Space Visual Retracing for Hallucination Mitigation in… ⭐163 · Updated last month
- MME-CoT: Benchmarking Chain-of-Thought in LMMs for Reasoning Quality, Robustness, and Efficiency ⭐132 · Updated 2 months ago
- Reverse Chain-of-Thought Problem Generation for Geometric Reasoning in Large Multimodal Models ⭐180 · Updated 11 months ago
- Official repository for VisionZip (CVPR 2025) ⭐366 · Updated 3 months ago