Quick exploration into fine-tuning Florence-2
★338 · Sep 19, 2024 · Updated last year
Alternatives and similar repositories for florence2-finetuning
Users interested in florence2-finetuning are comparing it to the libraries listed below
- Finetune your Florence-2 model easily ★19 · Jul 8, 2024 · Updated last year
- Recipes for shrinking, optimizing, and customizing cutting-edge vision models. ★1,886 · Jan 9, 2026 · Updated 2 months ago
- Notebooks for fine-tuning PaliGemma ★117 · Apr 15, 2025 · Updated 10 months ago
- Use Florence-2 to auto-label data for training fine-tuned object detection models. ★69 · Aug 15, 2024 · Updated last year
- Florence-2 is a novel vision foundation model with a unified, prompt-based representation for a variety of computer vision and vision-lan… ★155 · Jul 3, 2024 · Updated last year
- Utilities for loading and running text embeddings with ONNX ★45 · Aug 16, 2025 · Updated 6 months ago
- A family of lightweight multimodal models. ★1,051 · Nov 18, 2024 · Updated last year
- Real-time object detection using Florence-2 with a user-friendly GUI. ★31 · Aug 7, 2025 · Updated 7 months ago
- ★96 · Sep 19, 2024 · Updated last year
- Finetune your Florence-2 model easily ★21 · Jul 27, 2024 · Updated last year
- Load any CLIP model with a standardized interface ★22 · Oct 20, 2025 · Updated 4 months ago
- Cambrian-1 is a family of multimodal LLMs with a vision-centric design. ★1,988 · Nov 7, 2025 · Updated 4 months ago
- A dead-simple, modularized multimodal training and finetuning framework. Compatible with any LLaVA/Flamingo/QwenVL/MiniGemini etc. series … ★19 · Apr 24, 2024 · Updated last year
- Non-local Modeling for Image Quality Assessment ★13 · Dec 20, 2023 · Updated 2 years ago
- [ICCVW 25] LLaVA-MORE: A Comparative Study of LLMs and Visual Backbones for Enhanced Visual Instruction Tuning ★160 · Aug 8, 2025 · Updated 7 months ago
- A list of language models with permissive licenses such as MIT or Apache 2.0 ★24 · Feb 28, 2025 · Updated last year
- Official code for the paper "Mantis: Multi-Image Instruction Tuning" [TMLR 2024 Best Paper] ★240 · Jan 3, 2026 · Updated 2 months ago
- LLaVA combined with the MAGVIT image tokenizer, training an MLLM without a vision encoder. Unifying image understanding and generation. ★39 · Jun 20, 2024 · Updated last year
- A real-time CPU VLM at 500M parameters. Surpasses Moondream2 and SmolVLM. Train from scratch with ease. ★252 · Apr 22, 2025 · Updated 10 months ago
- ★29 · Aug 19, 2024 · Updated last year
- [CVPR 2024 Oral] InternVL Family: A Pioneering Open-Source Alternative to GPT-4o. An open-source multimodal dialogue model approaching GPT-4o performance ★9,854 · Sep 22, 2025 · Updated 5 months ago
- ★194 · Jun 3, 2025 · Updated 9 months ago
- A Framework of Small-scale Large Multimodal Models ★963 · Feb 7, 2026 · Updated last month
- VibeVoice real-time 0.5B Swift port ★28 · Dec 12, 2025 · Updated 2 months ago
- ReBase: Training Task Experts through Retrieval-Based Distillation ★29 · Feb 5, 2025 · Updated last year
- Train Llama LoRAs easily ★31 · Aug 3, 2023 · Updated 2 years ago
- Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT, and more. ★3,371 · May 19, 2025 · Updated 9 months ago
- Official code of "EVF-SAM: Early Vision-Language Fusion for Text-Prompted Segment Anything Model" ★497 · Mar 17, 2025 · Updated 11 months ago
- [ICCV 2025] LLaVA-CoT, a visual language model capable of spontaneous, systematic reasoning ★2,132 · Dec 12, 2025 · Updated 2 months ago
- [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond. ★24,500 · Aug 12, 2024 · Updated last year
- Repository for Meta Chameleon, a mixed-modal early-fusion foundation model from FAIR. ★2,085 · Jul 29, 2024 · Updated last year
- RobustSAM: Segment Anything Robustly on Degraded Images (CVPR 2024 Highlight) ★364 · Aug 31, 2024 · Updated last year
- Web-based tool to convert a model into a MyriadX blob ★16 · Dec 9, 2025 · Updated 3 months ago
- Reproduction of LLaVA-v1.5 based on the Llama-3-8b LLM backbone. ★65 · Oct 25, 2024 · Updated last year
- [CVPR 2024] Real-Time Open-Vocabulary Object Detection ★6,227 · Feb 26, 2025 · Updated last year
- Freeing data processing from scripting madness by providing a set of platform-agnostic, customizable pipeline processing blocks. ★2,915 · Updated this week
- ★4,582 · Sep 14, 2025 · Updated 5 months ago
- Witness the aha moment of VLM with less than $3. ★4,036 · May 19, 2025 · Updated 9 months ago
- [CVPR 2024] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… ★945 · Aug 5, 2025 · Updated 7 months ago