Quinn777 / AtomThink
Can Atomic Step Decomposition Enhance the Self-structured Reasoning of Multimodal Large Models?
☆23 · Updated 2 months ago
Alternatives and similar repositories for AtomThink
Users interested in AtomThink are comparing it to the repositories listed below.
- ☆74 · Updated 11 months ago
- NoisyRollout: Reinforcing Visual Reasoning with Data Augmentation ☆64 · Updated 2 weeks ago
- A Self-Training Framework for Vision-Language Reasoning ☆80 · Updated 4 months ago
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆103 · Updated last week
- ☆42 · Updated 2 months ago
- ☆77 · Updated 4 months ago
- SFT or RL? An Early Investigation into Training R1-Like Reasoning Large Vision-Language Models ☆112 · Updated last month
- An Easy-to-use Hallucination Detection Framework for LLMs. ☆59 · Updated last year
- [ArXiv] V2PE: Improving Multimodal Long-Context Capability of Vision-Language Models with Variable Visual Position Encoding ☆47 · Updated 5 months ago
- ☆54 · Updated 2 months ago
- The codebase for our EMNLP24 paper: Multimodal Self-Instruct: Synthetic Abstract Image and Visual Reasoning Instruction Using Language Mo… ☆78 · Updated 4 months ago
- ☆45 · Updated last month
- [ICLR 2025] MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation ☆82 · Updated 5 months ago
- [EMNLP 2024] mDPO: Conditional Preference Optimization for Multimodal Large Language Models. ☆74 · Updated 6 months ago
- Think or Not? Selective Reasoning via Reinforcement Learning for Vision-Language Models ☆36 · Updated last week
- GitHub repository for "Bring Reason to Vision: Understanding Perception and Reasoning through Model Merging" (ICML 2025) ☆51 · Updated last week
- The official code of "VL-Rethinker: Incentivizing Self-Reflection of Vision-Language Models with Reinforcement Learning" ☆103 · Updated last week
- ☆60 · Updated 2 weeks ago
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆69 · Updated 3 months ago
- A benchmark for evaluating the capabilities of large vision-language models (LVLMs) ☆45 · Updated last year
- Code for "CREAM: Consistency Regularized Self-Rewarding Language Models", ICLR 2025.☆22Updated 3 months ago
- [NeurIPS 2024] Calibrated Self-Rewarding Vision Language Models☆74Updated 11 months ago
- [ACL 2024] Multi-modal preference alignment remedies regression of visual instruction tuning on language model☆46Updated 6 months ago
- Preference Learning for LLaVA☆45Updated 6 months ago
- [ACL'25] We propose a novel fine-tuning method, Separate Memory and Reasoning, which combines prompt tuning with LoRA.☆54Updated 2 weeks ago
- [ICML 2025] Official implementation of paper 'Look Twice Before You Answer: Memory-Space Visual Retracing for Hallucination Mitigation in…☆120Updated last week
- Less is More: Mitigating Multimodal Hallucination from an EOS Decision Perspective (ACL 2024)☆50Updated 7 months ago
- Enhancing Large Vision Language Models with Self-Training on Image Comprehension.☆67Updated last year
- [arxiv: 2505.02156] Adaptive Thinking via Mode Policy Optimization for Social Language Agents☆27Updated 2 weeks ago
- ☆99Updated last year