OpenGVLab / V2PE
[ArXiv] V2PE: Improving Multimodal Long-Context Capability of Vision-Language Models with Variable Visual Position Encoding
☆57 · Updated last year
Alternatives and similar repositories for V2PE
Users interested in V2PE are comparing it to the repositories listed below.
- G1: Bootstrapping Perception and Reasoning Abilities of Vision-Language Model via Reinforcement Learning ☆94 · Updated 7 months ago
- ☆109 · Updated last year
- [NeurIPS 2025] NoisyRollout: Reinforcing Visual Reasoning with Data Augmentation ☆102 · Updated 3 months ago
- The official code of "VL-Rethinker: Incentivizing Self-Reflection of Vision-Language Models with Reinforcement Learning" [NeurIPS25] ☆176 · Updated 7 months ago
- Official code of *Virgo: A Preliminary Exploration on Reproducing o1-like MLLM* ☆109 · Updated 7 months ago
- The codebase for our EMNLP24 paper: Multimodal Self-Instruct: Synthetic Abstract Image and Visual Reasoning Instruction Using Language Mo… ☆85 · Updated 11 months ago
- (ACL 2025) MAmmoTH-VL: Eliciting Multimodal Reasoning with Instruction Tuning at Scale ☆49 · Updated 7 months ago
- Official repository of MMDU dataset ☆102 · Updated last year
- ☆101 · Updated 2 years ago
- Official repo for "PAPO: Perception-Aware Policy Optimization for Multimodal Reasoning" ☆110 · Updated last month
- ☆46 · Updated last year
- [NeurIPS 2024] Needle In A Multimodal Haystack (MM-NIAH): A comprehensive benchmark designed to systematically evaluate the capability of… ☆120 · Updated last year
- [SCIS 2024] The official implementation of the paper "MMInstruct: A High-Quality Multi-Modal Instruction Tuning Dataset with Extensive Di… ☆62 · Updated last year
- VideoNIAH: A Flexible Synthetic Method for Benchmarking Video MLLMs ☆54 · Updated 10 months ago
- This repo contains evaluation code for the paper "MileBench: Benchmarking MLLMs in Long Context" ☆35 · Updated last year
- A Self-Training Framework for Vision-Language Reasoning ☆88 · Updated 11 months ago
- Official implementation of "Traceable Evidence Enhanced Visual Grounded Reasoning: Evaluation and Methodology" ☆72 · Updated 2 months ago
- Multimodal RewardBench ☆58 · Updated 10 months ago
- [ACL 2024] Multi-modal preference alignment remedies regression of visual instruction tuning on language model ☆48 · Updated last year
- [AAAI 2025] HiRED strategically drops visual tokens in the image encoding stage to improve inference efficiency for High-Resolution Visio… ☆44 · Updated 8 months ago
- SophiaVL-R1: Reinforcing MLLMs Reasoning with Thinking Reward ☆88 · Updated 5 months ago
- Preference Learning for LLaVA ☆58 · Updated last year
- [ICLR 2025] MMIU: Multimodal Multi-image Understanding for Evaluating Large Vision-Language Models ☆93 · Updated last year
- Code for "Strengthening Multimodal Large Language Model with Bootstrapped Preference Optimization" ☆59 · Updated last year
- [ICCV 2025] Dynamic-VLM ☆28 · Updated last year
- Code for Math-LLaVA: Bootstrapping Mathematical Reasoning for Multimodal Large Language Models ☆92 · Updated last year
- [TMLR] Public code repo for paper "A Single Transformer for Scalable Vision-Language Modeling" ☆147 · Updated last year
- [NeurIPS 2024] This repo contains evaluation code for the paper "Are We on the Right Way for Evaluating Large Vision-Language Models" ☆202 · Updated last year
- [EMNLP 2024 Findings🔥] Official implementation of "LOOK-M: Look-Once Optimization in KV Cache for Efficient Multimodal Long-Context In… ☆103 · Updated last year
- ☆66 · Updated last year