2U1 / Qwen2-VL-Finetune
An open-source implementation for fine-tuning the Qwen2-VL and Qwen2.5-VL series by Alibaba Cloud.
☆378 · Updated this week
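For orientation, here is a minimal sketch of what LoRA fine-tuning a Qwen2-VL checkpoint looks like with Hugging Face `transformers` and `peft`. This is not this repository's own training script; the model ID, LoRA hyperparameters, and target module names are illustrative assumptions.

```python
# Minimal LoRA fine-tuning setup for Qwen2-VL (illustrative sketch,
# not this repository's training script).
import torch
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
from peft import LoraConfig, get_peft_model

model_id = "Qwen/Qwen2-VL-7B-Instruct"  # assumed checkpoint
model = Qwen2VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# Attach LoRA adapters to the attention projections; the target module
# names follow the Qwen2 decoder layout and are assumptions here.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# From here, batches of image+text chat examples would be encoded with
# `processor` and trained with transformers.Trainer or a custom loop.
```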
Alternatives and similar repositories for Qwen2-VL-Finetune:
Users interested in Qwen2-VL-Finetune are comparing it to the repositories listed below.
- LLaVA-UHD v2: an MLLM Integrating High-Resolution Feature Pyramid via Hierarchical Window Transformer ☆367 · Updated last month
- [ECCV 2024 Oral] Code for paper: An Image is Worth 1/2 Tokens After Layer 2: Plug-and-Play Inference Acceleration for Large Vision-Langua… ☆372 · Updated 2 months ago
- A Framework of Small-scale Large Multimodal Models ☆758 · Updated last month
- LLM2CLIP makes a SOTA pretrained CLIP model even more capable. ☆476 · Updated last month
- A minimal codebase for finetuning large multimodal models, supporting llava-1.5/1.6, llava-interleave, llava-next-video, llava-onevision,… ☆262 · Updated last week
- [ICLR 2025 Spotlight] OmniCorpus: A Unified Multimodal Corpus of 10 Billion-Level Images Interleaved with Text ☆314 · Updated 3 months ago
- [CVPR2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts ☆315 · Updated 7 months ago
- This repo contains the code for "VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks" [ICLR25] ☆152 · Updated this week
- 📖 A repository for organizing papers, code, and other resources related to unified multimodal models. ☆388 · Updated last month
- An open-source implementation for fine-tuning the Llama3.2-Vision series by Meta. ☆136 · Updated last week
- [NeurIPS'24 Spotlight] Visual CoT: Advancing Multi-Modal Language Models with a Comprehensive Dataset and Benchmark for Chain-of-Thought … ☆249 · Updated 2 months ago
- Harnessing 1.4M GPT4V-synthesized Data for A Lite Vision-Language Model ☆256 · Updated 8 months ago
- [CVPR'24] RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback ☆266 · Updated 5 months ago
- [CVPR'25] RLAIF-V: Aligning MLLMs through Open-Source AI Feedback for Super GPT-4V Trustworthiness ☆303 · Updated this week
- An open-source implementation for training LLaVA-NeXT. ☆383 · Updated 4 months ago
- Code for the Molmo Vision-Language Model ☆309 · Updated 2 months ago
- Open-source evaluation toolkit for large multi-modality models (LMMs), supporting 220+ LMMs and 80+ benchmarks ☆1,956 · Updated this week
- [Survey] Next Token Prediction Towards Multimodal Intelligence: A Comprehensive Survey ☆376 · Updated last month
- [ICLR2025] LLaVA-HR: High-Resolution Large Language-Vision Assistant ☆230 · Updated 6 months ago
- Famous Vision Language Models and Their Architectures ☆680 · Updated last week
- Quick exploration into fine-tuning Florence-2 ☆302 · Updated 5 months ago
- Official repository for the paper MG-LLaVA: Towards Multi-Granularity Visual Instruction Tuning (https://arxiv.org/abs/2406.17770). ☆152 · Updated 5 months ago
- A paper list of recent works on token compression for ViT and VLM ☆345 · Updated this week