lwpyh / Awesome-MLLM-Reasoning-Collection
A collection of multimodal reasoning papers, code, datasets, benchmarks, and resources.
⭐223 · Updated this week
Alternatives and similar repositories for Awesome-MLLM-Reasoning-Collection
Users interested in Awesome-MLLM-Reasoning-Collection are comparing it to the repositories listed below.
- (ECCV 2024) Empowering Multimodal Large Language Model as a Powerful Data Generator — ⭐110 · Updated 2 months ago
- [NAACL 2025 Oral] From redundancy to relevance: Enhancing explainability in multimodal large language models — ⭐95 · Updated 3 months ago
- Long-VITA: Scaling Large Multi-modal Models to 1 Million Tokens with Leading Short-Context Accuracy — ⭐285 · Updated 3 weeks ago
- [ICLR 2025] Mathematical Visual Instruction Tuning for Multi-modal Large Language Models — ⭐142 · Updated 6 months ago
- Chain-of-Spot: Interactive Reasoning Improves Large Vision-Language Models — ⭐93 · Updated last year
- The code for "TokenPacker: Efficient Visual Projector for Multimodal LLM", IJCV 2025 — ⭐249 · Updated last week
- An open-source implementation for training LLaVA-NeXT. — ⭐396 · Updated 7 months ago
- Collections of Papers and Projects for Multimodal Reasoning. — ⭐105 · Updated last month
- The Next Step Forward in Multimodal LLM Alignment — ⭐161 · Updated last month
- Reverse Chain-of-Thought Problem Generation for Geometric Reasoning in Large Multimodal Models — ⭐175 · Updated 7 months ago
- [NeurIPS 2024] Make Vision Matter in Visual-Question-Answering (VQA)! Introducing NaturalBench, a vision-centric VQA benchmark (NeurIPS'2… — ⭐84 · Updated 2 months ago
- A Comprehensive Survey on Evaluating Reasoning Capabilities in Multimodal Large Language Models. — ⭐63 · Updated 2 months ago
- [ICML 2025] Official repository for the paper "Scaling Video-Language Models to 10K Frames via Hierarchical Differential Distillation" — ⭐146 · Updated 3 weeks ago
- CVPR 2025 Multimodal Large Language Models Paper List — ⭐143 · Updated 2 months ago
- [ECCV 2024] Grounded Multimodal Large Language Model with Localized Visual Tokenization — ⭐565 · Updated last year
- A collection of token reduction (token pruning, merging, clustering, etc.) techniques for ML/AI — ⭐89 · Updated this week
- MME-CoT: Benchmarking Chain-of-Thought in LMMs for Reasoning Quality, Robustness, and Efficiency — ⭐108 · Updated last month
- GPT-ImgEval: Evaluating GPT-4o's state-of-the-art image generation capabilities — ⭐271 · Updated last month
- [ECCV 2024] Does Your Multi-modal LLM Truly See the Diagrams in Visual Math Problems? — ⭐163 · Updated last month
- [ICML 2025 Spotlight] An official implementation of VideoRoPE: What Makes for Good Video Rotary Position Embedding? — ⭐146 · Updated last month
- [NeurIPS 2024] Repo for the paper "ControlMLLM: Training-Free Visual Prompt Learning for Multimodal Large Language Models" — ⭐172 · Updated last week
- GPT4Vis: What Can GPT-4 Do for Zero-shot Visual Recognition? — ⭐187 · Updated last year
- A post-training method to enhance CLIP's fine-grained visual representations with generative models. — ⭐50 · Updated 2 months ago
- Official Repository of OmniCaptioner — ⭐144 · Updated last month
- [CVPR 2025] The code for "VideoRefer Suite: Advancing Spatial-Temporal Object Understanding with Video LLM" — ⭐206 · Updated 3 weeks ago
- [ICLR 2025] MME-RealWorld: Could Your Multimodal LLM Challenge High-Resolution Real-World Scenarios that are Difficult for Humans? — ⭐122 · Updated 3 months ago
- The official implementation of the paper "QuoTA: Query-oriented Token Assignment via CoT Query Decouple for Long Video Comprehens… — ⭐70 · Updated last month
- First Open-Source R1-like Video-LLM [2025/02/18] — ⭐345 · Updated 3 months ago
- Multi-granularity Correspondence Learning from Long-term Noisy Videos [ICLR 2024, Oral] — ⭐113 · Updated last year
- [ICLR'25] Official code for the paper "MLLMs Know Where to Look: Training-free Perception of Small Visual Details with Multimodal LLMs" — ⭐207 · Updated last month