2U1 / Gemma3-Finetune
An open-source implementation for the Gemma3 series by Google.
☆53 · Updated 3 months ago
Alternatives and similar repositories for Gemma3-Finetune
Users interested in Gemma3-Finetune are comparing it to the libraries listed below.
- An open-source implementation for fine-tuning SmolVLM. ☆50 · Updated last month
- An open-source implementation for fine-tuning the Llama3.2-Vision series by Meta. ☆171 · Updated last month
- A minimal implementation of a LLaVA-style VLM with interleaved image, text, and video processing ability. ☆96 · Updated 9 months ago
- ☆101 · Updated 4 months ago
- An open-source implementation for fine-tuning Phi3-Vision and Phi3.5-Vision by Microsoft. ☆98 · Updated last week
- [ICCV2023] TinyCLIP: CLIP Distillation via Affinity Mimicking and Weight Inheritance ☆110 · Updated last year
- Reproduction of DeepSeek-R1 ☆238 · Updated 5 months ago
- An open-source implementation for fine-tuning Molmo-7B-D and Molmo-7B-O by allenai. ☆58 · Updated 5 months ago
- [ACL 2025 🔥] Rethinking Step-by-step Visual Reasoning in LLMs ☆306 · Updated 4 months ago
- [Fully open] [Encoder-free MLLM] Vision as LoRA ☆340 · Updated 4 months ago
- ☆119 · Updated last year
- [ICCV 2025] OpenVision: A Fully-Open, Cost-Effective Family of Advanced Vision Encoders for Multimodal Learning ☆395 · Updated 3 weeks ago
- A family of highly capable yet efficient large multimodal models ☆191 · Updated last year
- ✨✨Beyond LLaVA-HD: Diving into High-Resolution Large Multimodal Models ☆162 · Updated 9 months ago
- [NeurIPS 2024] Official PyTorch implementation code for realizing the technical part of Mamba-based traversal of rationale (Meteor) to im… ☆115 · Updated last year
- Florence-2 is a novel vision foundation model with a unified, prompt-based representation for a variety of computer vision and vision-lan… ☆99 · Updated last year
- [NeurIPS 2024] MoVA: Adapting Mixture of Vision Experts to Multimodal Context ☆166 · Updated last year
- Rethinking RL Scaling for Vision Language Models: A Transparent, From-Scratch Framework and Comprehensive Evaluation Scheme ☆143 · Updated 6 months ago
- Chain of Experts (CoE) enables communication between experts within Mixture-of-Experts (MoE) models ☆220 · Updated 3 weeks ago
- [EMNLP 2024] RWKV-CLIP: A Robust Vision-Language Representation Learner ☆142 · Updated 4 months ago
- [CVPR 2025 Highlight] The official CLIP training codebase of Inf-CL: "Breaking the Memory Barrier: Near Infinite Batch Size Scaling for C… ☆269 · Updated 8 months ago
- Official codebase for "Can 1B LLM Surpass 405B LLM? Rethinking Compute-Optimal Test-Time Scaling". ☆272 · Updated 7 months ago
- A CPU-realtime VLM in 500M parameters. Surpassed Moondream2 and SmolVLM. Training from scratch with ease. ☆235 · Updated 5 months ago
- CuMo: Scaling Multimodal LLM with Co-Upcycled Mixture-of-Experts ☆155 · Updated last year
- mllm-npu: training multimodal large language models on Ascend NPUs ☆92 · Updated last year
- ☆55 · Updated 10 months ago
- [COLM 2025] Open-Qwen2VL: Compute-Efficient Pre-Training of Fully-Open Multimodal LLMs on Academic Resources ☆272 · Updated last month
- Reproduction of LLaVA-v1.5 based on the Llama-3-8b LLM backbone. ☆65 · Updated 11 months ago
- Code for ChatRex: Taming Multimodal LLM for Joint Perception and Understanding ☆204 · Updated 8 months ago
- LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture ☆211 · Updated 9 months ago