zhyang2226 / OPA-DPO
[CVPR 2025 (Oral)] Mitigating Hallucinations in Large Vision-Language Models via DPO: On-Policy Data Hold the Key
⭐67 · Updated 2 months ago
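For orientation: the paper applies Direct Preference Optimization (DPO) to LVLMs, with on-policy preference data as the key ingredient. Below is a minimal sketch of the vanilla DPO loss the method builds on, in generic PyTorch with illustrative variable names. It is not OPA-DPO's actual code, which additionally constructs the on-policy data.

```python
# Hedged sketch of the standard DPO objective (Rafailov et al., 2023);
# variable names are illustrative, not OPA-DPO's implementation.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Each input is the summed log-prob of a full response, shape (batch,)."""
    # Implicit rewards are log-ratios between the policy and a frozen reference model.
    chosen_logratio = policy_chosen_logps - ref_chosen_logps
    rejected_logratio = policy_rejected_logps - ref_rejected_logps
    # Push the chosen response's implicit reward above the rejected one's.
    return -F.logsigmoid(beta * (chosen_logratio - rejected_logratio)).mean()
```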
Alternatives and similar repositories for OPA-DPO
Users interested in OPA-DPO are comparing it to the repositories listed below.
- Think or Not Think: A Study of Explicit Thinking in Rule-Based Visual Reinforcement Fine-Tuning ⭐56 · Updated 2 months ago
- 🔥 CVPR 2025 Multimodal Large Language Models Paper List ⭐149 · Updated 4 months ago
- ⭐134 · Updated 5 months ago
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models ⭐71 · Updated last year
- [NeurIPS'24 Spotlight] Visual CoT: Advancing Multi-Modal Language Models with a Comprehensive Dataset and Benchmark for Chain-of-Thought … ⭐360 · Updated 7 months ago
- MME-CoT: Benchmarking Chain-of-Thought in LMMs for Reasoning Quality, Robustness, and Efficiency ⭐124 · Updated 2 weeks ago
- [NeurIPS 2024] Repo for the paper "ControlMLLM: Training-Free Visual Prompt Learning for Multimodal Large Language Models" ⭐186 · Updated 3 weeks ago
- [ECCV 2024] API: Attention Prompting on Image for Large Vision-Language Models ⭐98 · Updated 10 months ago
- Beyond Hallucinations: Enhancing LVLMs through Hallucination-Aware Direct Preference Optimization ⭐91 · Updated last year
- R1-like Video-LLM for Temporal Grounding ⭐109 · Updated last month
- [ECCV 2024] Paying More Attention to Image: A Training-Free Method for Alleviating Hallucination in LVLMs ⭐132 · Updated 9 months ago
- [ICLR'25] Reconstructive Visual Instruction Tuning ⭐101 · Updated 4 months ago
- Code for DeCo: Decoupling token compression from semantic abstraction in multimodal large language models ⭐63 · Updated 3 weeks ago
- [CVPR '25] Interleaved-Modal Chain-of-Thought ⭐70 · Updated 3 months ago
- ⭐62 · Updated last week
- Official repository for the CoMM Dataset ⭐45 · Updated 7 months ago
- ⭐93 · Updated 4 months ago
- [EMNLP'23] The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ⭐87 · Updated last year
- [ICLR'25] Official code for the paper "MLLMs Know Where to Look: Training-free Perception of Small Visual Details with Multimodal LLMs" ⭐240 · Updated 3 months ago
- Official repository of DoraemonGPT: Toward Understanding Dynamic Scenes with Large Language Models ⭐86 · Updated 11 months ago
- Reinforcement Learning Tuning for VideoLLMs: Reward Design and Data Efficiency ⭐47 · Updated 2 months ago
- Collections of Papers and Projects for Multimodal Reasoning ⭐105 · Updated 3 months ago
- [CVPR 2025 Oral] VideoEspresso: A Large-Scale Chain-of-Thought Dataset for Fine-Grained Video Reasoning via Core Frame Selection ⭐108 · Updated last week
- Latest open-source "Thinking with images" (O3/O4-mini) papers, covering training-free, SFT-based, and RL-enhanced methods for "fine-grain… ⭐81 · Updated last month
- ⭐45 · Updated 7 months ago
- Official implementation of MIA-DPO ⭐63 · Updated 6 months ago
- [NeurIPS 2024] Calibrated Self-Rewarding Vision Language Models ⭐77 · Updated last year
- [CVPR 2025] Adaptive Keyframe Sampling for Long Video Understanding ⭐87 · Updated 3 months ago
- This repository will continuously update the latest papers, technical reports, and benchmarks on multimodal reasoning! ⭐47 · Updated 4 months ago
- [ACL 2024] Multi-modal preference alignment remedies regression of visual instruction tuning on language model ⭐46 · Updated 8 months ago