Osilly / Awesome-Interleaving-Reasoning
Interleaving Reasoning: Next-Generation Reasoning Systems for AGI
☆128 · Updated last month
Alternatives and similar repositories for Awesome-Interleaving-Reasoning
Users interested in Awesome-Interleaving-Reasoning are comparing it to the repositories listed below.
- ☆104 · Updated last month
- A Comprehensive Survey on Evaluating Reasoning Capabilities in Multimodal Large Language Models. ☆68 · Updated 5 months ago
- ☆79 · Updated last year
- ☆136 · Updated 6 months ago
- NoisyRollout: Reinforcing Visual Reasoning with Data Augmentation ☆85 · Updated last week
- MME-CoT: Benchmarking Chain-of-Thought in LMMs for Reasoning Quality, Robustness, and Efficiency ☆126 · Updated 2 weeks ago
- This repository continuously updates the latest papers, technical reports, and benchmarks on multimodal reasoning! ☆48 · Updated 5 months ago
- A Self-Training Framework for Vision-Language Reasoning ☆82 · Updated 7 months ago
- More Thinking, Less Seeing? Assessing Amplified Hallucination in Multimodal Reasoning Models ☆45 · Updated 2 months ago
- An RLHF Infrastructure for Vision-Language Models ☆181 · Updated 9 months ago
- MAT: Multi-modal Agent Tuning 🔥 ICLR 2025 (Spotlight) ☆53 · Updated 2 months ago
- [ICML 2025] Official implementation of paper 'Look Twice Before You Answer: Memory-Space Visual Retracing for Hallucination Mitigation in… ☆145 · Updated last month
- [ICLR 2025] MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation ☆96 · Updated 8 months ago
- Official implementation of GUI-R1: A Generalist R1-Style Vision-Language Action Model For GUI Agents ☆165 · Updated 3 months ago
- Collections of Papers and Projects for Multimodal Reasoning. ☆105 · Updated 3 months ago
- [CVPR 2025] Interleaved-Modal Chain-of-Thought ☆75 · Updated last week
- SFT or RL? An Early Investigation into Training R1-Like Reasoning Large Vision-Language Models ☆132 · Updated 4 months ago
- Data and Code for CVPR 2025 paper "MMVU: Measuring Expert-Level Multi-Discipline Video Understanding" ☆70 · Updated 5 months ago
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models ☆71 · Updated last year
- [CVPR 2024 Highlight] Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding ☆303 · Updated 10 months ago
- Paper collections of multi-modal LLM for Math/STEM/Code. ☆120 · Updated last week
- [ACM MM 2025] TimeChat-online: 80% Visual Tokens are Naturally Redundant in Streaming Videos ☆71 · Updated last month
- ☆65 · Updated 3 weeks ago
- The official code of "VL-Rethinker: Incentivizing Self-Reflection of Vision-Language Models with Reinforcement Learning" ☆139 · Updated 2 months ago
- [ECCV 2024] Paying More Attention to Image: A Training-Free Method for Alleviating Hallucination in LVLMs ☆132 · Updated 9 months ago
- ☆26 · Updated 6 months ago
- A paper list on LLMs and multimodal LLMs ☆42 · Updated last week
- A curated collection of resources focused on the Mechanistic Interpretability (MI) of Large Multimodal Models (LMMs). This repository agg… ☆119 · Updated 3 weeks ago
- An Arena-style Automated Evaluation Benchmark for Detailed Captioning ☆54 · Updated 2 months ago
- VLM2-Bench [ACL 2025 Main]: A Closer Look at How Well VLMs Implicitly Link Explicit Matching Visual Cues ☆41 · Updated 3 months ago