HongbangYuan / OmniReward
☆39 · Updated last month
Alternatives and similar repositories for OmniReward
Users interested in OmniReward are comparing it to the repositories listed below.
- Official repository of the video reasoning benchmark MMR-V. Can Your MLLMs "Think with Video"? ☆37 · Updated 7 months ago
- Official implementation of MIA-DPO ☆70 · Updated last year
- V1: Toward Multimodal Reasoning by Designing Auxiliary Task ☆36 · Updated 9 months ago
- A unified framework for controllable caption generation across images, videos, and audio. Supports multi-modal inputs and customizable ca… ☆52 · Updated 6 months ago
- (ICLR 2025 Spotlight) Official code repository for Interleaved Scene Graph ☆31 · Updated 5 months ago
- Official repository of Personalized Visual Instruct Tuning ☆33 · Updated 10 months ago
- Assessing Context-Aware Creative Intelligence in MLLMs ☆23 · Updated 6 months ago
- The code for "VISTA: Enhancing Long-Duration and High-Resolution Video Understanding by VIdeo SpatioTemporal Augmentation" [CVPR 2025] ☆20 · Updated 11 months ago
- Multimodal RewardBench ☆60 · Updated 11 months ago
- Co-Reinforcement Learning for Unified Multimodal Understanding and Generation ☆37 · Updated 6 months ago
- ☆36 · Updated last year
- [NeurIPS 2025] Think or Not? Selective Reasoning via Reinforcement Learning for Vision-Language Models ☆53 · Updated 3 months ago
- Official repository of "ScaleCap: Inference-Time Scalable Image Captioning via Dual-Modality Debiasing" ☆58 · Updated 7 months ago
- ☆80 · Updated 7 months ago
- Official code for "BoostStep: Boosting mathematical capability of Large Language Models via improved single-step reasoning" ☆37 · Updated last year
- Quick Long Video Understanding [TMLR 2025] ☆74 · Updated 3 months ago
- [arXiv] V2PE: Improving Multimodal Long-Context Capability of Vision-Language Models with Variable Visual Position Encoding ☆59 · Updated last year
- Code for "Strengthening Multimodal Large Language Model with Bootstrapped Preference Optimization" ☆60 · Updated last year
- Code for "Commonsense-T2I Challenge: Can Text-to-Image Generation Models Understand Commonsense?" [COLM 2024] ☆24 · Updated last year
- ☆18 · Updated last year
- GitHub repository for "Bring Reason to Vision: Understanding Perception and Reasoning through Model Merging" (ICML 2025) ☆88 · Updated 4 months ago
- ☆27 · Updated 6 months ago
- Code and data for the paper "Exploring Hallucination of Large Multimodal Models in Video Understanding: Benchmark, Analysis and Mitigation" ☆24 · Updated 3 months ago
- Implementation and dataset for the paper "Can MLLMs Perform Text-to-Image In-Context Learning?" ☆42 · Updated 7 months ago
- Code for "CAFe: Unifying Representation and Generation with Contrastive-Autoregressive Finetuning" ☆32 · Updated 10 months ago
- [NeurIPS 2025] VeriThinker: Learning to Verify Makes Reasoning Model Efficient ☆64 · Updated 4 months ago
- ✨✨ The Curse of Multi-Modalities (CMM): Evaluating Hallucinations of Large Multimodal Models across Language, Visual, and Audio ☆53 · Updated 6 months ago
- [NeurIPS 2025] NoisyRollout: Reinforcing Visual Reasoning with Data Augmentation ☆104 · Updated 4 months ago
- Official code for "ARM-Thinker: Reinforcing Multimodal Generative Reward Models with Agentic Tool Use and Visual Reasoning" ☆79 · Updated last month
- G1: Bootstrapping Perception and Reasoning Abilities of Vision-Language Model via Reinforcement Learning ☆95 · Updated 8 months ago