2U1 / Llama3.2-Vision-Finetune
An open-source implementation for fine-tuning the Llama3.2-Vision series by Meta.
☆156 · Updated this week
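For context, fine-tuning workflows like this one typically attach LoRA adapters to the model via Hugging Face `transformers` and `peft`. The sketch below is a minimal illustration, not this repository's actual training code; the LoRA rank, alpha, and target module names are assumptions chosen for the example.

```python
# Minimal LoRA setup sketch for Llama 3.2 Vision (assumes transformers >= 4.45,
# peft, and access to the gated meta-llama checkpoint).
import torch
from transformers import AutoProcessor, MllamaForConditionalGeneration
from peft import LoraConfig, get_peft_model

model_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"
processor = AutoProcessor.from_pretrained(model_id)
model = MllamaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Attach LoRA adapters to the attention projections; only these small
# low-rank matrices are trained, keeping memory far below full fine-tuning.
lora_config = LoraConfig(
    r=16,                     # illustrative rank, not the repo's setting
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```

From there, the adapted model can be trained with a standard `Trainer` loop over image-text pairs prepared by the processor.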
Alternatives and similar repositories for Llama3.2-Vision-Finetune:
Users interested in Llama3.2-Vision-Finetune are comparing it to the libraries listed below.
- ☆358 · Updated 3 months ago
- Rethinking Step-by-step Visual Reasoning in LLMs ☆292 · Updated 3 months ago
- A minimal codebase for finetuning large multimodal models, supporting llava-1.5/1.6, llava-interleave, llava-next-video, llava-onevision,… ☆293 · Updated 2 months ago
- An open-source implementation for fine-tuning Phi3-Vision and Phi3.5-Vision by Microsoft. ☆92 · Updated last week
- An open-source implementation for fine-tuning the Qwen2-VL and Qwen2.5-VL series by Alibaba Cloud. ☆697 · Updated last week
- This repo contains the code for "VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks" [ICLR25] ☆211 · Updated last month
- Fine-tuning Qwen2.5-VL for vision-language tasks | Optimized for vision understanding | LoRA & PEFT support. ☆64 · Updated 3 months ago
- LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture ☆201 · Updated 4 months ago
- [CVPR'24] RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback ☆276 · Updated 8 months ago
- A Framework of Small-scale Large Multimodal Models ☆812 · Updated 2 weeks ago
- LLaVA-UHD v2: an MLLM Integrating High-Resolution Semantic Pyramid via Hierarchical Window Transformer ☆376 · Updated 3 weeks ago
- CuMo: Scaling Multimodal LLM with Co-Upcycled Mixture-of-Experts ☆147 · Updated 11 months ago
- [CVPR2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts ☆319 · Updated 9 months ago
- [CVPR'25 highlight] RLAIF-V: Open-Source AI Feedback Leads to Super GPT-4V Trustworthiness ☆358 · Updated 2 months ago
- A curated list of awesome Multimodal studies. ☆189 · Updated last week
- [CVPR 2024 Highlight] OPERA: Alleviating Hallucination in Multi-Modal Large Language Models via Over-Trust Penalty and Retrospection-Allo… ☆334 · Updated 8 months ago
- [ICLR2025] LLaVA-HR: High-Resolution Large Language-Vision Assistant ☆237 · Updated 8 months ago
- Explore the Multimodal “Aha Moment” on 2B Model ☆585 · Updated last month
- LLaVA-MORE: A Comparative Study of LLMs and Visual Backbones for Enhanced Visual Instruction Tuning ☆133 · Updated 2 weeks ago
- SlowFast-LLaVA: A Strong Training-Free Baseline for Video Large Language Models ☆218 · Updated 7 months ago
- [NeurIPS'24 Spotlight] Visual CoT: Advancing Multi-Modal Language Models with a Comprehensive Dataset and Benchmark for Chain-of-Thought … ☆307 · Updated 4 months ago
- Official code for the paper "UniIR: Training and Benchmarking Universal Multimodal Information Retrievers" (ECCV 2024) ☆139 · Updated 7 months ago
- LLM2CLIP makes SOTA pretrained CLIP models even more SOTA. ☆508 · Updated last month
- PG-Video-LLaVA: Pixel Grounding in Large Multimodal Video Models ☆257 · Updated last year
- Harnessing 1.4M GPT4V-synthesized Data for A Lite Vision-Language Model ☆261 · Updated 10 months ago
- MMR1: Advancing the Frontiers of Multimodal Reasoning ☆158 · Updated last month
- Reproduction of DeepSeek-R1 ☆227 · Updated 3 weeks ago
- This repo contains evaluation code for the paper "MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for E… ☆423 · Updated 3 weeks ago
- InstructionGPT-4 ☆39 · Updated last year
- Long Context Transfer from Language to Vision ☆374 · Updated last month