2U1 / Llama3.2-Vision-Finetune
An open-source implementation for fine-tuning the Llama3.2-Vision series by Meta.
Related projects
Alternatives and complementary repositories for Llama3.2-Vision-Finetune
- A minimal codebase for finetuning large multimodal models, supporting llava-1.5/1.6, llava-interleave, llava-next-video, llava-onevision,…
- An open-source implementation for fine-tuning the Qwen2-VL series by Alibaba Cloud.
- LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture
- LLaVA-HR: High-Resolution Large Language-Vision Assistant
- Official code for GPT4Video: A Unified Multimodal Large Language Model for Instruction-Followed Understanding and Safety-Aware Generation
- An open-source implementation for fine-tuning Phi3-Vision and Phi3.5-Vision by Microsoft.
- [NeurIPS 2024] Official PyTorch implementation code for realizing the technical part of Mamba-based traversal of rationale (Meteor) to im…
- Official code for the paper "Mantis: Multi-Image Instruction Tuning" (TMLR 2024)
- LLaVA-UHD: an LMM Perceiving Any Aspect Ratio and High-Resolution Images
- [TMLR] Public code repo for the paper "A Single Transformer for Scalable Vision-Language Modeling"
- LLM2CLIP makes the SOTA pretrained CLIP model even more SOTA.
- Code and data for "VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks"
- [NeurIPS 2024] MoVA: Adapting Mixture of Vision Experts to Multimodal Context
- Reproduction of LLaVA-v1.5 based on the Llama-3-8b LLM backbone.
- ✨✨Beyond LLaVA-HD: Diving into High-Resolution Large Multimodal Models