TideDra / VL-RLHF
An RLHF Infrastructure for Vision-Language Models
☆186 · Updated 11 months ago
Alternatives and similar repositories for VL-RLHF
Users interested in VL-RLHF are comparing it to the libraries listed below.
- Beyond Hallucinations: Enhancing LVLMs through Hallucination-Aware Direct Preference Optimization ☆97 · Updated last year
- [CVPR'24] RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback ☆295 · Updated last year
- ☆84 · Updated last year
- ☆100 · Updated last year
- [NeurIPS 2025] NoisyRollout: Reinforcing Visual Reasoning with Data Augmentation ☆95 · Updated last month
- A Self-Training Framework for Vision-Language Reasoning ☆86 · Updated 9 months ago
- Paper collections of multi-modal LLM for Math/STEM/Code. ☆129 · Updated 2 weeks ago
- The official code of "VL-Rethinker: Incentivizing Self-Reflection of Vision-Language Models with Reinforcement Learning" [NeurIPS 2025] ☆164 · Updated 5 months ago
- ☆98 · Updated 10 months ago
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆109 · Updated 5 months ago
- [TMLR 25] SFT or RL? An Early Investigation into Training R1-Like Reasoning Large Vision-Language Models ☆139 · Updated last month
- This repository contains the code for SFT, RLHF, and DPO, designed for vision-based LLMs, including the LLaVA models and the LLaMA-3.2-vi… ☆116 · Updated 4 months ago
- An Easy-to-use, Scalable and High-performance RLHF Framework designed for Multimodal Models. ☆146 · Updated last month
- [NeurIPS 2024] This repo contains evaluation code for the paper "Are We on the Right Way for Evaluating Large Vision-Language Models" ☆198 · Updated last year
- [NeurIPS 2024] Needle In A Multimodal Haystack (MM-NIAH): A comprehensive benchmark designed to systematically evaluate the capability of… ☆117 · Updated 11 months ago
- [EMNLP 2024] mDPO: Conditional Preference Optimization for Multimodal Large Language Models. ☆83 · Updated last year
- The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆226 · Updated 2 months ago
- An LLM-free Multi-dimensional Benchmark for Multi-modal Hallucination Evaluation ☆142 · Updated last year
- Code for Math-LLaVA: Bootstrapping Mathematical Reasoning for Multimodal Large Language Models ☆91 · Updated last year
- [ECCV 2024] Paying More Attention to Image: A Training-Free Method for Alleviating Hallucination in LVLMs ☆151 · Updated last year
- MMR1: Enhancing Multimodal Reasoning with Variance-Aware Sampling and Open Resources ☆208 · Updated last month
- Official repository of MMDU dataset ☆97 · Updated last year
- [ICLR'24] Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning ☆290 · Updated last year
- [ICML 2024] MMT-Bench: A Comprehensive Multimodal Benchmark for Evaluating Large Vision-Language Models Towards Multitask AGI ☆115 · Updated last year
- ☆155 · Updated last year
- Harnessing 1.4M GPT4V-synthesized Data for A Lite Vision-Language Model ☆276 · Updated last year
- MME-CoT: Benchmarking Chain-of-Thought in LMMs for Reasoning Quality, Robustness, and Efficiency ☆133 · Updated 3 months ago
- [CVPR'24] HallusionBench: You See What You Think? Or You Think What You See? An Image-Context Reasoning Benchmark Challenging for GPT-4V(… ☆307 · Updated last month
- VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models ☆75 · Updated last year
- [EMNLP'23] The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆95 · Updated 2 months ago