2U1 / Gemma3-Finetune
An open-source implementation for the Gemma3 series by Google.
☆55 · Updated 4 months ago
Alternatives and similar repositories for Gemma3-Finetune
Users interested in Gemma3-Finetune are comparing it to the libraries listed below.
- An open-source implementation for fine-tuning SmolVLM. ☆52 · Updated last month
- An open-source implementation for fine-tuning the Llama3.2-Vision series by Meta. ☆171 · Updated last week
- Florence-2 is a novel vision foundation model with a unified, prompt-based representation for a variety of computer vision and vision-lan… ☆106 · Updated last year
- An open-source implementation for fine-tuning Phi3-Vision and Phi3.5-Vision by Microsoft. ☆99 · Updated last month
- [ICCV2023] TinyCLIP: CLIP Distillation via Affinity Mimicking and Weight Inheritance ☆114 · Updated last year
- [Fully open] [Encoder-free MLLM] Vision as LoRA ☆341 · Updated 4 months ago
- An open-source implementation for fine-tuning Molmo-7B-D and Molmo-7B-O by allenai. ☆58 · Updated 6 months ago
- [ICCV 2025] OpenVision: A Fully-Open, Cost-Effective Family of Advanced Vision Encoders for Multimodal Learning ☆400 · Updated last month
- ☆101 · Updated 4 months ago
- Reproduction of LLaVA-v1.5 based on the Llama-3-8b LLM backbone. ☆65 · Updated last year
- [ACL 2025 🔥] Rethinking Step-by-step Visual Reasoning in LLMs ☆307 · Updated 5 months ago
- Vision Search Assistant: Empower Vision-Language Models as Multimodal Search Engines ☆126 · Updated 11 months ago
- ☆122 · Updated last year
- [NeurIPS 2024] Official PyTorch implementation code for realizing the technical part of Mamba-based traversal of rationale (Meteor) to im… ☆115 · Updated last year
- LLaVA-UHD v2: an MLLM Integrating High-Resolution Semantic Pyramid via Hierarchical Window Transformer ☆389 · Updated this week
- [COLM 2025] Open-Qwen2VL: Compute-Efficient Pre-Training of Fully-Open Multimodal LLMs on Academic Resources ☆277 · Updated 2 months ago
- A real-time 500M-parameter VLM for CPU that surpasses Moondream2 and SmolVLM; train it from scratch with ease. ☆238 · Updated 6 months ago
- LLM2CLIP makes a SOTA pretrained CLIP model even more SOTA. ☆557 · Updated 4 months ago
- A minimal implementation of a LLaVA-style VLM with interleaved image, text, and video processing. ☆96 · Updated 10 months ago
- CuMo: Scaling Multimodal LLM with Co-Upcycled Mixture-of-Experts ☆157 · Updated last year
- Official implementation of the paper "Finetuned Multimodal Language Models are High-Quality Image-Text Data Filters". ☆67 · Updated 6 months ago
- [TMLR] Public code repo for the paper "A Single Transformer for Scalable Vision-Language Modeling" ☆148 · Updated 11 months ago
- [EMNLP 2024] RWKV-CLIP: A Robust Vision-Language Representation Learner ☆143 · Updated 5 months ago
- Fine-tuning Qwen2.5-VL for vision-language tasks | Optimized for vision understanding | LoRA & PEFT support ☆137 · Updated 8 months ago
- A family of highly capable yet efficient large multimodal models ☆191 · Updated last year
- [NeurIPS 2024] MoVA: Adapting Mixture of Vision Experts to Multimodal Context ☆167 · Updated last year
- ✨✨Beyond LLaVA-HD: Diving into High-Resolution Large Multimodal Models ☆162 · Updated 10 months ago
- [EMNLP 2024] Official PyTorch implementation code for realizing the technical part of Traversal of Layers (TroL) presenting new propagati… ☆98 · Updated last year
- [CVPR 2025 Highlight] The official CLIP training codebase of Inf-CL: "Breaking the Memory Barrier: Near Infinite Batch Size Scaling for C… ☆269 · Updated 9 months ago
- [ACM MM25] The official code of "Breaking the Modality Barrier: Universal Embedding Learning with Multimodal LLMs" ☆94 · Updated 2 months ago