lwpyh / Awesome-MLLM-Reasoning-Collection
A collection of multimodal reasoning papers, code, datasets, benchmarks, and resources.
☆309 · Updated last month
Alternatives and similar repositories for Awesome-MLLM-Reasoning-Collection
Users interested in Awesome-MLLM-Reasoning-Collection are comparing it to the libraries listed below.
- A curated collection of resources, tools, and frameworks for developing GUI Agents. ☆152 · Updated last week
- A collection of token reduction (token pruning, merging, clustering, etc.) techniques for ML/AI ☆173 · Updated last month
- An open-source implementation for training LLaVA-NeXT. ☆422 · Updated 11 months ago
- [NAACL 2025 Oral] 🎉 From redundancy to relevance: Enhancing explainability in multimodal large language models ☆121 · Updated 7 months ago
- (ICCV 2025) Enhance CLIP and MLLM's fine-grained visual representations with generative models. ☆73 · Updated 3 months ago
- The code for "TokenPacker: Efficient Visual Projector for Multimodal LLM", IJCV 2025 ☆269 · Updated 4 months ago
- [ICLR 2025] Mathematical Visual Instruction Tuning for Multi-modal Large Language Models ☆151 · Updated 10 months ago
- ✨✨Long-VITA: Scaling Large Multi-modal Models to 1 Million Tokens with Leading Short-Context Accuracy ☆294 · Updated 4 months ago
- (ECCV 2024) Empowering Multimodal Large Language Model as a Powerful Data Generator ☆114 · Updated 6 months ago
- [NeurIPS 2025] Efficient Reasoning Vision Language Models ☆401 · Updated 3 weeks ago
- ✨✨R1-Reward: Training Multimodal Reward Model Through Stable Reinforcement Learning ☆262 · Updated 5 months ago
- [ICML 2025] Official repository for the paper "Scaling Video-Language Models to 10K Frames via Hierarchical Differential Distillation" ☆177 · Updated 2 weeks ago
- Collections of Papers and Projects for Multimodal Reasoning. ☆105 · Updated 5 months ago
- A Comprehensive Survey on Evaluating Reasoning Capabilities in Multimodal Large Language Models. ☆68 · Updated 6 months ago
- The Next Step Forward in Multimodal LLM Alignment ☆181 · Updated 5 months ago
- [ICLR 2025] Official code for the paper "MLLMs Know Where to Look: Training-free Perception of Small Visual Details with Multimodal LLMs" ☆274 · Updated 5 months ago
- 🔥CVPR 2025 Multimodal Large Language Models Paper List ☆155 · Updated 6 months ago
- Interleaving Reasoning: Next-Generation Reasoning Systems for AGI ☆182 · Updated last month
- ☆108 · Updated last month
- Reverse Chain-of-Thought Problem Generation for Geometric Reasoning in Large Multimodal Models ☆180 · Updated 11 months ago
- A Gaussian dense reward framework for GUI grounding training ☆227 · Updated last month
- A continuously updated list of the latest papers, technical reports, and benchmarks on multimodal reasoning. ☆52 · Updated 6 months ago
- [ECCV 2024] Grounded Multimodal Large Language Model with Localized Visual Tokenization ☆577 · Updated last year
- Chain-of-Spot: Interactive Reasoning Improves Large Vision-Language Models ☆96 · Updated last year
- [NeurIPS 2025] Official implementation for the paper "SeePhys: Does Seeing Help Thinking? -- Benchmarking Vision-Based Physics Reasoning" ☆44 · Updated 3 weeks ago
- [ECCV 2024] Does Your Multi-modal LLM Truly See the Diagrams in Visual Math Problems? ☆170 · Updated 5 months ago
- MME-CoT: Benchmarking Chain-of-Thought in LMMs for Reasoning Quality, Robustness, and Efficiency ☆130 · Updated 2 months ago
- Papers about Hallucination in Multi-Modal Large Language Models (MLLMs) ☆97 · Updated 10 months ago
- [ICML 2025 Oral] An official implementation of VideoRoPE & VideoRoPE++ ☆197 · Updated 2 months ago
- 🚀 [NeurIPS 2024] Make Vision Matter in Visual Question Answering (VQA)! Introducing NaturalBench, a vision-centric VQA benchmark ☆87 · Updated 3 months ago