orrzohar / Video-STaR
[ICLR 2025] Video-STaR: Self-Training Enables Video Instruction Tuning with Any Supervision
☆65 · Updated last year
Alternatives and similar repositories for Video-STaR
Users interested in Video-STaR are comparing it to the repositories listed below.
- Language Repository for Long Video Understanding ☆31 · Updated last year
- ☆45 · Updated 6 months ago
- [arXiv] Aligning Modalities in Vision Large Language Models via Preference Fine-tuning ☆86 · Updated last year
- ☆27 · Updated 8 months ago
- Enhancing Large Vision Language Models with Self-Training on Image Comprehension. ☆69 · Updated last year
- [NeurIPS 2024] Calibrated Self-Rewarding Vision Language Models ☆76 · Updated last year
- This repo contains evaluation code for the paper "BLINK: Multimodal Large Language Models Can See but Not Perceive". https://arxiv.or… ☆130 · Updated last year
- [NeurIPS 2024] Official Repository of Multi-Object Hallucination in Vision-Language Models ☆29 · Updated 8 months ago
- (ACL 2025) MAmmoTH-VL: Eliciting Multimodal Reasoning with Instruction Tuning at Scale ☆46 · Updated last month
- VideoNIAH: A Flexible Synthetic Method for Benchmarking Video MLLMs ☆47 · Updated 4 months ago
- This repo contains the code for "MEGA-Bench: Scaling Multimodal Evaluation to over 500 Real-World Tasks" [ICLR 2025] ☆71 · Updated 2 weeks ago
- Official implementation of "Automated Generation of Challenging Multiple-Choice Questions for Vision Language Model Evaluation" (CVPR 202… ☆32 · Updated last month
- ☆87 · Updated 3 weeks ago
- Official repo of the ICLR 2025 paper "MMWorld: Towards Multi-discipline Multi-faceted World Model Evaluation in Videos" ☆28 · Updated this week
- ACL'24 (Oral) Tuning Large Multimodal Models for Videos using Reinforcement Learning from AI Feedback ☆67 · Updated 10 months ago
- Official implementation for "A Simple LLM Framework for Long-Range Video Question-Answering" ☆97 · Updated 8 months ago
- [NeurIPS 2023] Official implementation of the paper "Large Language Models are Visual Reasoning Coordinators" ☆105 · Updated last year
- Code and datasets for "What's "up" with vision-language models? Investigating their struggle with spatial reasoning" ☆54 · Updated last year
- DeepPerception: Advancing R1-like Cognitive Visual Perception in MLLMs for Knowledge-Intensive Visual Grounding ☆65 · Updated last month
- Multimodal RewardBench ☆42 · Updated 4 months ago
- A Comprehensive Benchmark for Robust Multi-image Understanding ☆12 · Updated 10 months ago
- ☆83 · Updated 6 months ago
- ZoomEye: Enhancing Multimodal LLMs with Human-Like Zooming Capabilities through Tree-Based Image Exploration ☆46 · Updated 6 months ago
- OpenVLThinker: An Early Exploration to Vision-Language Reasoning via Iterative Self-Improvement ☆93 · Updated last week
- [EMNLP'23] The official GitHub page for "Evaluating Object Hallucination in Large Vision-Language Models" ☆85 · Updated last year
- Official implementation of MIA-DPO ☆59 · Updated 5 months ago
- VideoHallucer: the first comprehensive benchmark for hallucination detection in large video-language models (LVLMs) ☆35 · Updated 3 months ago
- [ICLR 2023] CoVLM: Composing Visual Entities and Relationships in Large Language Models Via Communicative Decoding ☆45 · Updated last month
- The official code of "VL-Rethinker: Incentivizing Self-Reflection of Vision-Language Models with Reinforcement Learning" ☆124 · Updated last month
- ☆44 · Updated last year