2U1 / Qwen2-VL-Finetune
An open-source implementation for fine-tuning Alibaba Cloud's Qwen2-VL model series.
☆173 · Updated this week
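For context on what fine-tuning Qwen2-VL typically involves, here is a minimal, illustrative sketch using Hugging Face transformers and peft. It is not code from this repository; the checkpoint name, LoRA rank, and target modules are assumptions chosen for the example.

```python
# Illustrative sketch only (not from 2U1/Qwen2-VL-Finetune): load Qwen2-VL and
# attach LoRA adapters for parameter-efficient fine-tuning.
# The checkpoint name and LoRA hyperparameters below are assumptions.
import torch
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
from peft import LoraConfig, get_peft_model

model_id = "Qwen/Qwen2-VL-7B-Instruct"  # assumed checkpoint
model = Qwen2VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)  # handles images + chat template

# Add LoRA adapters to the language-model attention projections; the rest of the
# model (including the vision encoder) stays frozen.
lora_cfg = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()
# From here, a standard transformers Trainer (or a custom loop) can be run on
# image-text conversation data preprocessed with the processor above.
```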
Alternatives and similar repositories for Qwen2-VL-Finetune:
Users interested in Qwen2-VL-Finetune are comparing it to the libraries listed below.
- ☆291 · Updated this week
- LLaVA-UHD v2: an MLLM Integrating High-Resolution Feature Pyramid via Hierarchical Window Transformer ☆354 · Updated 2 weeks ago
- Harnessing 1.4M GPT4V-synthesized Data for A Lite Vision-Language Model ☆252 · Updated 7 months ago
- [ECCV 2024 Oral] Code for paper: An Image is Worth 1/2 Tokens After Layer 2: Plug-and-Play Inference Acceleration for Large Vision-Langua… ☆347 · Updated 3 weeks ago
- This repo contains the code for "VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks" [ICLR25] ☆119 · Updated this week
- LLaVA-HR: High-Resolution Large Language-Vision Assistant ☆223 · Updated 5 months ago
- OmniCorpus: A Unified Multimodal Corpus of 10 Billion-Level Images Interleaved with Text ☆305 · Updated 2 months ago
- Implementation of PALI3 from the paper "PALI-3 VISION LANGUAGE MODELS: SMALLER, FASTER, STRONGER" ☆143 · Updated this week
- [CVPR'24] RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback ☆259 · Updated 4 months ago
- Dataset and Code for our ACL 2024 paper: "Multimodal Table Understanding". We propose the first large-scale Multimodal IFT and Pre-Train … ☆183 · Updated 4 months ago
- Exploring Efficient Fine-Grained Perception of Multimodal Large Language Models ☆56 · Updated 2 months ago
- Official repository for paper MG-LLaVA: Towards Multi-Granularity Visual Instruction Tuning (https://arxiv.org/abs/2406.17770) ☆152 · Updated 4 months ago
- A minimal codebase for finetuning large multimodal models, supporting llava-1.5/1.6, llava-interleave, llava-next-video, llava-onevision,… ☆227 · Updated this week
- [CVPR2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts ☆308 · Updated 6 months ago
- RLAIF-V: Aligning MLLMs through Open-Source AI Feedback for Super GPT-4V Trustworthiness ☆283 · Updated last month
- A family of highly capable yet efficient large multimodal models ☆176 · Updated 5 months ago
- Vary-tiny codebase built upon LAVIS (for training from scratch) and PDF image-text pair data (about 600k pairs, including English/Chinese) ☆76 · Updated 4 months ago
- Code/Data for the paper: "LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding" ☆261 · Updated 7 months ago
- [NeurIPS 2024] Needle In A Multimodal Haystack (MM-NIAH): A comprehensive benchmark designed to systematically evaluate the capability of… ☆109 · Updated 2 months ago
- The huggingface implementation of Fine-grained Late-interaction Multi-modal Retriever. ☆80 · Updated this week
- My implementation of "Patch n’ Pack: NaViT, a Vision Transformer for any Aspect Ratio and Resolution" ☆210 · Updated this week
- Efficient Multimodal Large Language Models: A Survey ☆307 · Updated 5 months ago
- An open-source implementation for fine-tuning Meta's Llama3.2-Vision model series ☆122 · Updated this week
- Official code for Paper "Mantis: Multi-Image Instruction Tuning" (TMLR2024) ☆195 · Updated this week
- [NeurIPS 2024] MoVA: Adapting Mixture of Vision Experts to Multimodal Context ☆144 · Updated 4 months ago
- InstructionGPT-4 ☆38 · Updated last year
- [CVPR 2024] CapsFusion: Rethinking Image-Text Data at Scale ☆201 · Updated 11 months ago
- [ICLR 2025] LLaVA-MoD: Making LLaVA Tiny via MoE-Knowledge Distillation ☆81 · Updated last week
- LLaVA-PruMerge: Adaptive Token Reduction for Efficient Large Multimodal Models ☆113 · Updated 8 months ago
- Baichuan-Omni: Towards Capable Open-source Omni-modal LLM 🌊 ☆258 · Updated this week