2U1 / Gemma3-Finetune
An open-source implementation for the Gemma3 series by Google.
☆48 · Updated 2 months ago
Alternatives and similar repositories for Gemma3-Finetune
Users that are interested in Gemma3-Finetune are comparing it to the libraries listed below
- An open-source implementation for fine-tuning SmolVLM. ☆46 · Updated last week
- An open-source implementation for fine-tuning the Llama3.2-Vision series by Meta. ☆170 · Updated last week
- An open-source implementation for fine-tuning Phi3-Vision and Phi3.5-Vision by Microsoft. ☆96 · Updated 4 months ago
- Fine-tuning Qwen2.5-VL for vision-language tasks | Optimized for vision understanding | LoRA & PEFT support. ☆123 · Updated 7 months ago
- An open-source implementation for fine-tuning Molmo-7B-D and Molmo-7B-O by allenai. ☆58 · Updated 4 months ago
- CuMo: Scaling Multimodal LLM with Co-Upcycled Mixture-of-Experts ☆153 · Updated last year
- [ICCV 2025] OpenVision: A Fully-Open, Cost-Effective Family of Advanced Vision Encoders for Multimodal Learning ☆378 · Updated last week
- ☆95 · Updated 3 months ago
- Florence-2 is a novel vision foundation model with a unified, prompt-based representation for a variety of computer vision and vision-lan… ☆91 · Updated last year
- [ICCV2023] TinyCLIP: CLIP Distillation via Affinity Mimicking and Weight Inheritance ☆106 · Updated last year
- [NeurIPS 2024] MoVA: Adapting Mixture of Vision Experts to Multimodal Context ☆166 · Updated 11 months ago
- [ECCV 2024] Official PyTorch implementation code for realizing the technical part of Mixture of All Intelligence (MoAI) to improve perfor… ☆326 · Updated last year
- A Simple Framework of Small-scale LMMs for Video Understanding ☆92 · Updated 3 months ago
- LLM2CLIP makes SOTA pretrained CLIP models even more SOTA. ☆546 · Updated 2 months ago
- [NeurIPS 2024] Official PyTorch implementation code for realizing the technical part of Mamba-based traversal of rationale (Meteor) to im… ☆115 · Updated last year
- [ICLR2025] LLaVA-HR: High-Resolution Large Language-Vision Assistant ☆240 · Updated last year
- [ACL 2025 🔥] Rethinking Step-by-step Visual Reasoning in LLMs ☆305 · Updated 4 months ago
- Reproduction of LLaVA-v1.5 based on Llama-3-8b LLM backbone. ☆65 · Updated 10 months ago
- [Fully open] [Encoder-free MLLM] Vision as LoRA ☆338 · Updated 3 months ago
- [ICCVW 25] LLaVA-MORE: A Comparative Study of LLMs and Visual Backbones for Enhanced Visual Instruction Tuning ☆150 · Updated last month
- A CPU Realtime VLM in 500M. Surpassed Moondream2 and SmolVLM. Training from scratch with ease. ☆234 · Updated 4 months ago
- [COLM 2025] Open-Qwen2VL: Compute-Efficient Pre-Training of Fully-Open Multimodal LLMs on Academic Resources ☆265 · Updated 3 weeks ago
- [ACM MM25] The official code of "Breaking the Modality Barrier: Universal Embedding Learning with Multimodal LLMs" ☆90 · Updated last month
- ☆179 · Updated 7 months ago
- A minimal implementation of a LLaVA-style VLM with interleaved image, text, and video processing ability. ☆96 · Updated 9 months ago
- A family of highly capable yet efficient large multimodal models ☆191 · Updated last year
- Vision Search Assistant: Empower Vision-Language Models as Multimodal Search Engines ☆126 · Updated 10 months ago
- [EMNLP 2024] RWKV-CLIP: A Robust Vision-Language Representation Learner ☆142 · Updated 4 months ago
- LLaVE: Large Language and Vision Embedding Models with Hardness-Weighted Contrastive Learning ☆65 · Updated 3 months ago
- [ACL2025 Findings] Migician: Revealing the Magic of Free-Form Multi-Image Grounding in Multimodal Large Language Models ☆76 · Updated 4 months ago