lwpyh / Awesome-MLLM-Reasoning-Collection
A collection of multimodal reasoning papers, code, datasets, benchmarks, and resources.
☆279 · Updated 3 weeks ago
Alternatives and similar repositories for Awesome-MLLM-Reasoning-Collection
Users interested in Awesome-MLLM-Reasoning-Collection are comparing it to the repositories listed below.
- [NAACL 2025 Oral] From redundancy to relevance: Enhancing explainability in multimodal large language models ☆107 · Updated 5 months ago
- A curated collection of resources, tools, and frameworks for developing GUI Agents. ☆108 · Updated this week
- An open-source implementation for training LLaVA-NeXT. ☆413 · Updated 9 months ago
- (ECCV 2024) Empowering Multimodal Large Language Model as a Powerful Data Generator ☆112 · Updated 4 months ago
- A collection of token reduction (token pruning, merging, clustering, etc.) techniques for ML/AI ☆116 · Updated 2 weeks ago
- The code for "TokenPacker: Efficient Visual Projector for Multimodal LLM", IJCV 2025 ☆263 · Updated 2 months ago
- (ICCV 2025) Enhance CLIP and MLLM's fine-grained visual representations with generative models. ☆68 · Updated last month
- ✨✨Long-VITA: Scaling Large Multi-modal Models to 1 Million Tokens with Leading Short-Context Accuracy ☆292 · Updated 2 months ago
- [ICLR 2025] Mathematical Visual Instruction Tuning for Multi-modal Large Language Models ☆148 · Updated 8 months ago
- [ICML 2025] Official repository for the paper "Scaling Video-Language Models to 10K Frames via Hierarchical Differential Distillation" ☆165 · Updated 2 months ago
- Chain-of-Spot: Interactive Reasoning Improves Large Vision-Language Models ☆97 · Updated last year
- [ECCV 2024] Grounded Multimodal Large Language Model with Localized Visual Tokenization ☆577 · Updated last year
- Efficient Reasoning Vision Language Models ☆337 · Updated 2 weeks ago
- ✨✨R1-Reward: Training Multimodal Reward Model Through Stable Reinforcement Learning ☆246 · Updated 2 months ago
- [NeurIPS 2024] Make Vision Matter in Visual-Question-Answering (VQA)! Introducing NaturalBench, a vision-centric VQA benchmark (NeurIPS'2… ☆84 · Updated last month
- Reverse Chain-of-Thought Problem Generation for Geometric Reasoning in Large Multimodal Models ☆177 · Updated 9 months ago
- [CVPR 2025] The code for "VideoRefer Suite: Advancing Spatial-Temporal Object Understanding with Video LLM" ☆250 · Updated last month
- The Next Step Forward in Multimodal LLM Alignment ☆170 · Updated 3 months ago
- A collection of papers and projects for multimodal reasoning. ☆105 · Updated 3 months ago
- [ECCV 2024] Does Your Multi-modal LLM Truly See the Diagrams in Visual Math Problems? ☆167 · Updated 3 months ago
- A Comprehensive Survey on Evaluating Reasoning Capabilities in Multimodal Large Language Models. ☆68 · Updated 4 months ago
- [MM'24 Oral] Prior Knowledge Integration via LLM Encoding and Pseudo Event Regulation for Video Moment Retrieval ☆127 · Updated 11 months ago
- [ACL 2023 Findings] FACTUAL dataset and the textual scene graph parser trained on FACTUAL. ☆113 · Updated last month
- [ICLR 2024 Oral] Multi-granularity Correspondence Learning from Long-term Noisy Videos ☆116 · Updated last year
- u-LLaVA: Unifying Multi-Modal Tasks via Large Language Model ☆134 · Updated 3 months ago
- [ICML 2025 Oral] An official implementation of VideoRoPE & VideoRoPE++ ☆176 · Updated last week
- Linguistic-Aware Patch Slimming Framework for Fine-grained Cross-Modal Alignment, CVPR 2024 ☆97 · Updated last month
- 🔥CVPR 2025 Multimodal Large Language Models Paper List ☆149 · Updated 4 months ago
- GPT-ImgEval: Evaluating GPT-4o's state-of-the-art image generation capabilities ☆286 · Updated 3 months ago
- [ICLR 2025] MLLM for On-Demand Spatial-Temporal Understanding at Arbitrary Resolution ☆318 · Updated last month