lwpyh / Awesome-MLLM-Reasoning-Collection
A collection of multimodal reasoning papers, code, datasets, benchmarks, and resources.
☆192 · Updated this week
Alternatives and similar repositories for Awesome-MLLM-Reasoning-Collection:
Users interested in Awesome-MLLM-Reasoning-Collection are comparing it to the repositories listed below.
- [NAACL 2025 Oral] From redundancy to relevance: Enhancing explainability in multimodal large language models ☆93 · Updated 2 months ago
- Long-VITA: Scaling Large Multi-modal Models to 1 Million Tokens with Leading Short-Context Accuracy ☆275 · Updated last month
- [ECCV 2024] Empowering Multimodal Large Language Model as a Powerful Data Generator ☆107 · Updated last month
- Reverse Chain-of-Thought Problem Generation for Geometric Reasoning in Large Multimodal Models ☆174 · Updated 5 months ago
- [ICLR 2025] Mathematical Visual Instruction Tuning for Multi-modal Large Language Models ☆137 · Updated 4 months ago
- A post-training method to enhance CLIP's fine-grained visual representations with generative models. ☆48 · Updated 3 weeks ago
- [NeurIPS 2024] Make Vision Matter in Visual-Question-Answering (VQA)! Introducing NaturalBench, a vision-centric VQA benchmark ☆80 · Updated 2 weeks ago
- Chain-of-Spot: Interactive Reasoning Improves Large Vision-Language Models ☆93 · Updated last year
- An official implementation of VideoRoPE: What Makes for Good Video Rotary Position Embedding? ☆127 · Updated 2 weeks ago
- An open-source implementation for training LLaVA-NeXT. ☆392 · Updated 6 months ago
- Multi-granularity Correspondence Learning from Long-term Noisy Videos [ICLR 2024, Oral] ☆113 · Updated last year
- ☆126 · Updated 2 weeks ago
- GPT-ImgEval: Evaluating GPT-4o's state-of-the-art image generation capabilities ☆240 · Updated 2 weeks ago
- ☆103 · Updated 2 weeks ago
- [NeurIPS 2024] Official PyTorch implementation of LOVA3 ☆82 · Updated last month
- The code for "TokenPacker: Efficient Visual Projector for Multimodal LLM". ☆247 · Updated 3 months ago
- [MM'24 Oral] Prior Knowledge Integration via LLM Encoding and Pseudo Event Regulation for Video Moment Retrieval ☆125 · Updated 8 months ago
- u-LLaVA: Unifying Multi-Modal Tasks via Large Language Model ☆131 · Updated 2 weeks ago
- [CVPR 2024] Linguistic-Aware Patch Slimming Framework for Fine-grained Cross-Modal Alignment ☆89 · Updated this week
- The Next Step Forward in Multimodal LLM Alignment ☆145 · Updated last month
- [ECCV 2024] Does Your Multi-modal LLM Truly See the Diagrams in Visual Math Problems? ☆159 · Updated 7 months ago
- [CVPR 2025] The code for "VideoRefer Suite: Advancing Spatial-Temporal Object Understanding with Video LLM" ☆188 · Updated 3 weeks ago
- [ICLR'24] Democratizing Fine-grained Visual Recognition with Large Language Models ☆176 · Updated 9 months ago
- [ECCV 2024] Grounded Multimodal Large Language Model with Localized Visual Tokenization ☆562 · Updated 10 months ago
- CVPR 2025 Multimodal Large Language Models Paper List ☆136 · Updated last month
- [ICLR 2025] MLLM for On-Demand Spatial-Temporal Understanding at Arbitrary Resolution ☆302 · Updated last month
- GPT4Vis: What Can GPT-4 Do for Zero-shot Visual Recognition? ☆186 · Updated 11 months ago
- [ICLR'25] Official code for the paper 'MLLMs Know Where to Look: Training-free Perception of Small Visual Details with Multimodal LLMs' ☆154 · Updated this week
- [NeurIPS 2024] AWT: Transferring Vision-Language Models via Augmentation, Weighting, and Transportation ☆97 · Updated 6 months ago
- Official implementation of X-Prompt: Towards Universal In-Context Image Generation in Auto-Regressive Vision Language Foundation Models ☆153 · Updated 4 months ago