2U1 / Gemma3-Finetune
An open-source implementation for the Gemma3 series by Google.
☆62 · Updated 6 months ago
Alternatives and similar repositories for Gemma3-Finetune
Users interested in Gemma3-Finetune are comparing it to the libraries listed below.
- An open-source implementation for fine-tuning the Llama3.2-Vision series by Meta. ☆173 · Updated 2 months ago
- An open-source implementation for fine-tuning Molmo-7B-D and Molmo-7B-O by allenai. ☆61 · Updated 8 months ago
- An open-source implementation for fine-tuning SmolVLM. ☆60 · Updated 4 months ago
- [ICCV2023] TinyCLIP: CLIP Distillation via Affinity Mimicking and Weight Inheritance ☆122 · Updated last year
- ☆105 · Updated 7 months ago
- An open-source implementation for fine-tuning Phi3-Vision and Phi3.5-Vision by Microsoft. ☆98 · Updated 3 months ago
- Fine-tuning Qwen2.5-VL for vision-language tasks | Optimized for Vision understanding | LoRA & PEFT support. ☆145 · Updated 11 months ago
- [ICCV 2025] OpenVision: A Fully-Open, Cost-Effective Family of Advanced Vision Encoders for Multimodal Learning ☆413 · Updated last month
- A minimal implementation of LLaVA-style VLM with interleaved image & text & video processing ability. ☆97 · Updated last year
- Official code implementation of Slow Perception: Let's Perceive Geometric Figures Step-by-step ☆158 · Updated 5 months ago
- [Fully open] [Encoder-free MLLM] Vision as LoRA ☆360 · Updated 7 months ago
- E5-V: Universal Embeddings with Multimodal Large Language Models ☆273 · Updated last month
- [NeurIPS 2024] Official PyTorch implementation code for realizing the technical part of Mamba-based traversal of rationale (Meteor) to im… ☆116 · Updated last year
- CuMo: Scaling Multimodal LLM with Co-Upcycled Mixture-of-Experts ☆162 · Updated last year
- Florence-2 is a novel vision foundation model with a unified, prompt-based representation for a variety of computer vision and vision-lan… ☆138 · Updated last year
- [NeurIPS 2024] MoVA: Adapting Mixture of Vision Experts to Multimodal Context ☆168 · Updated last year
- ✨✨Beyond LLaVA-HD: Diving into High-Resolution Large Multimodal Models ☆164 · Updated last year
- [COLM 2025] Open-Qwen2VL: Compute-Efficient Pre-Training of Fully-Open Multimodal LLMs on Academic Resources ☆300 · Updated 4 months ago
- ☆385 · Updated 11 months ago
- The SAIL-VL2 series model developed by the Bytedance Douyin Content Group ☆76 · Updated 3 months ago
- LLaVA-UHD v3: Progressive Visual Compression for Efficient Native-Resolution Encoding in MLLMs ☆409 · Updated 3 weeks ago
- Official implementation of our paper "Finetuned Multimodal Language Models are High-Quality Image-Text Data Filters". ☆69 · Updated 8 months ago
- [ACL 2025 🔥] Rethinking Step-by-step Visual Reasoning in LLMs ☆310 · Updated 7 months ago
- LLM2CLIP makes a SOTA pretrained CLIP model even more SOTA. ☆567 · Updated last month
- ☆67 · Updated 4 months ago
- ☆124 · Updated last year
- LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture ☆212 · Updated last year
- [CVPR 2025 Highlight] The official CLIP training codebase of Inf-CL: "Breaking the Memory Barrier: Near Infinite Batch Size Scaling for C… ☆275 · Updated 11 months ago
- A CPU Realtime VLM in 500M. Surpassed Moondream2 and SmolVLM. Training from scratch with ease. ☆243 · Updated 8 months ago
- Rethinking RL Scaling for Vision Language Models: A Transparent, From-Scratch Framework and Comprehensive Evaluation Scheme ☆146 · Updated 9 months ago