2U1 / Qwen2-VL-Finetune
An open-source implementation for fine-tuning the Qwen2-VL and Qwen2.5-VL series by Alibaba Cloud.
☆648 · Updated this week
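For orientation, here is a minimal sketch of the kind of setup such fine-tuning builds on, using the Hugging Face transformers integration of Qwen2-VL. This is not this repository's own training entry point; the checkpoint ID and the choice to freeze the vision tower are illustrative assumptions.

```python
# Minimal sketch (not this repo's training script): load Qwen2-VL through
# the Hugging Face transformers integration and freeze the vision tower,
# a common choice when fine-tuning only the language-model side.
import torch
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

model_id = "Qwen/Qwen2-VL-7B-Instruct"  # illustrative checkpoint choice
model = Qwen2VLForConditionalGeneration.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
processor = AutoProcessor.from_pretrained(model_id)

# Freeze vision-tower parameters by name so only the LLM receives
# gradients; whether to freeze is a per-run decision, not a requirement.
for name, param in model.named_parameters():
    if "visual" in name:
        param.requires_grad = False
```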
Alternatives and similar repositories for Qwen2-VL-Finetune:
Users interested in Qwen2-VL-Finetune are comparing it to the libraries listed below.
- ☆354 · Updated 2 months ago
- A fork to add multimodal model training to open-r1 ☆1,227 · Updated 2 months ago
- A Framework of Small-scale Large Multimodal Models ☆800 · Updated last month
- Open-source evaluation toolkit for large multi-modality models (LMMs); supports 220+ LMMs and 80+ benchmarks ☆2,264 · Updated this week
- A minimal codebase for finetuning large multimodal models, supporting llava-1.5/1.6, llava-interleave, llava-next-video, llava-onevision,… ☆290 · Updated 2 months ago
- This is the first paper to explore how to effectively use RL for MLLMs and introduce Vision-R1, a reasoning MLLM that leverages cold-sta… ☆527 · Updated last week
- MM-EUREKA: Exploring Visual Aha Moment with Rule-based Large-scale Reinforcement Learning ☆574 · Updated this week
- Official repository of ‘Visual-RFT: Visual Reinforcement Fine-Tuning’ ☆1,606 · Updated last week
- A journey to a real multimodal R1! We are working on large-scale experiments ☆295 · Updated last month
- Explore the Multimodal “Aha Moment” on 2B Model ☆577 · Updated last month
- Multimodal Chain-of-Thought Reasoning: A Comprehensive Survey ☆507 · Updated this week
- This repository provides valuable references for researchers in the field of multimodality; please start your exploratory travel in RL-bas… ☆663 · Updated this week
- Extends OpenRLHF to support LMM RL training for reproducing DeepSeek-R1 on multimodal tasks. ☆728 · Updated this week
- R1-onevision, a visual language model capable of deep CoT reasoning. ☆506 · Updated last week
- [NeurIPS'24 Spotlight] Visual CoT: Advancing Multi-Modal Language Models with a Comprehensive Dataset and Benchmark for Chain-of-Thought … ☆298 · Updated 4 months ago
- EasyR1: An Efficient, Scalable, Multi-Modality RL Training Framework based on veRL ☆2,138 · Updated this week
- Project Page For "Seg-Zero: Reasoning-Chain Guided Segmentation via Cognitive Reinforcement" ☆325 · Updated 2 weeks ago
- ✨First Open-Source R1-like Video-LLM [2025/02/18] ☆325 · Updated 2 months ago
- 📖 A curated list of resources dedicated to hallucination of multimodal large language models (MLLM). ☆655 · Updated 2 weeks ago
- Reading notes about Multimodal Large Language Models, Large Language Models, and Diffusion Models ☆382 · Updated 2 weeks ago
- LLaVA-UHD v2: an MLLM Integrating High-Resolution Semantic Pyramid via Hierarchical Window Transformer ☆374 · Updated this week
- LLM2CLIP makes the SOTA pretrained CLIP model even more SOTA. ☆506 · Updated last month
- [ECCV 2024 Oral] Code for paper: An Image is Worth 1/2 Tokens After Layer 2: Plug-and-Play Inference Acceleration for Large Vision-Langua… ☆417 · Updated 3 months ago
- NeurIPS 2024 Paper: A Unified Pixel-level Vision LLM for Understanding, Generating, Segmenting, Editing ☆526 · Updated 6 months ago
- Video-R1: Reinforcing Video Reasoning in MLLMs [🔥the first paper to explore R1 for video] ☆469 · Updated this week
- Famous Vision Language Models and Their Architectures ☆789 · Updated 2 months ago
- [CVPR'25 highlight] RLAIF-V: Open-Source AI Feedback Leads to Super GPT-4V Trustworthiness ☆355 · Updated last month
- Next-Token Prediction is All You Need ☆2,099 · Updated last month
- VisionLLM Series ☆1,050 · Updated last month
- [ICLR 2025 Spotlight] OmniCorpus: A Unified Multimodal Corpus of 10 Billion-Level Images Interleaved with Text ☆340 · Updated last month