zhangfaen / finetune-Qwen2-VL
☆201 · Updated 3 weeks ago
Related projects
Alternatives and complementary repositories for finetune-Qwen2-VL
- LLaVA-UHD: an LMM Perceiving Any Aspect Ratio and High-Resolution Images ☆318 · Updated last month
- A minimal codebase for finetuning large multimodal models, supporting llava-1.5/1.6, llava-interleave, llava-next-video, llava-onevision,… ☆179 · Updated 3 weeks ago
- official code for "Fox: Focus Anywhere for Fine-grained Multi-page Document Understanding" ☆128 · Updated 5 months ago
- An open-source implementation for fine-tuning the Qwen2-VL series by Alibaba Cloud. ☆113 · Updated 2 weeks ago
- OmniCorpus: A Unified Multimodal Corpus of 10 Billion-Level Images Interleaved with Text ☆274 · Updated this week
- Document Artificial Intelligence ☆130 · Updated last month
- ☆126 · Updated 9 months ago
- a family of highly capable yet efficient large multimodal models ☆166 · Updated 2 months ago
- LLaVA-HR: High-Resolution Large Language-Vision Assistant ☆212 · Updated 3 months ago
- Code/Data for the paper: "LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding" ☆258 · Updated 5 months ago
- Vary-tiny codebase built upon LAVIS (for training from scratch) and PDF image-text pair data (about 600k, English/Chinese) ☆68 · Updated 2 months ago
- RLAIF-V: Aligning MLLMs through Open-Source AI Feedback for Super GPT-4V Trustworthiness ☆241 · Updated 2 weeks ago
- Quick exploration into fine-tuning Florence-2 ☆271 · Updated 2 months ago
- Dataset and Code for our ACL 2024 paper: "Multimodal Table Understanding". We propose the first large-scale Multimodal IFT and Pre-Train … ☆164 · Updated last month
- E5-V: Universal Embeddings with Multimodal Large Language Models ☆173 · Updated 4 months ago
- 🔥🔥 First-ever hour-scale video understanding models ☆166 · Updated 3 weeks ago
- This repo contains the code and data for "VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks" ☆69 · Updated last week
- Baichuan-Omni: Towards Capable Open-source Omni-modal LLM 🌊 ☆234 · Updated 2 weeks ago
- Official code for the paper "Mantis: Multi-Image Instruction Tuning" (TMLR 2024) ☆184 · Updated this week
- ☆278 · Updated 2 weeks ago
- Harnessing 1.4M GPT4V-synthesized Data for A Lite Vision-Language Model ☆246 · Updated 4 months ago
- [ACM'MM 2024 Oral] Official code for "OneChart: Purify the Chart Structural Extraction via One Auxiliary Token" ☆197 · Updated last month
- Long Context Transfer from Language to Vision ☆334 · Updated 3 weeks ago
- Implementation of PALI3 from the paper "PaLI-3 Vision Language Models: Smaller, Faster, Stronger" ☆142 · Updated last week
- SlowFast-LLaVA: A Strong Training-Free Baseline for Video Large Language Models ☆173 · Updated 2 months ago
- LLM2CLIP makes SOTA pretrained CLIP models even more SOTA. ☆242 · Updated this week
- ☆152 · Updated 4 months ago
- Image Textualization: An Automatic Framework for Generating Rich and Detailed Image Descriptions (NeurIPS 2024) ☆144 · Updated 3 months ago
- [CVPR'24] RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback ☆235 · Updated 2 months ago
- [ECCV 2024 Oral] Code for paper: An Image is Worth 1/2 Tokens After Layer 2: Plug-and-Play Inference Acceleration for Large Vision-Langua… ☆274 · Updated 3 months ago