andimarafioti / florence2-finetuning
Quick exploration into fine-tuning Florence-2
★339 · Sep 19, 2024 · Updated last year
Alternatives and similar repositories for florence2-finetuning
Users interested in florence2-finetuning are comparing it to the libraries listed below.
- Recipes for shrinking, optimizing, customizing cutting-edge vision models. ★1,875 · Jan 9, 2026 · Updated last month
- Notebooks for fine-tuning PaliGemma ★117 · Apr 15, 2025 · Updated 9 months ago
- Use Florence-2 to auto-label data for use in training fine-tuned object detection models. ★69 · Aug 15, 2024 · Updated last year
- Florence-2 is a novel vision foundation model with a unified, prompt-based representation for a variety of computer vision and vision-lan… ★150 · Jul 3, 2024 · Updated last year
- Utilities for loading and running text embeddings with ONNX ★45 · Aug 16, 2025 · Updated 5 months ago
- A family of lightweight multimodal models. ★1,051 · Nov 18, 2024 · Updated last year
- Real-time object detection using Florence-2 with a user-friendly GUI. ★30 · Aug 7, 2025 · Updated 6 months ago
- Florence-2 ★72 · Feb 13, 2025 · Updated last year
- Cambrian-1 is a family of multimodal LLMs with a vision-centric design. ★1,985 · Nov 7, 2025 · Updated 3 months ago
- 4M: Massively Multimodal Masked Modeling ★1,789 · Jun 2, 2025 · Updated 8 months ago
- Load any CLIP model with a standardized interface ★22 · Oct 20, 2025 · Updated 3 months ago
- A dead-simple, modularized multimodal training and fine-tuning framework. Compatible with any LLaVA/Flamingo/QwenVL/MiniGemini etc. series … ★19 · Apr 24, 2024 · Updated last year
- [ICCVW 25] LLaVA-MORE: A Comparative Study of LLMs and Visual Backbones for Enhanced Visual Instruction Tuning ★159 · Aug 8, 2025 · Updated 6 months ago
- A list of language models with permissive licenses such as MIT or Apache 2.0 ★24 · Feb 28, 2025 · Updated 11 months ago
- Official code for the paper "Mantis: Multi-Image Instruction Tuning" [TMLR 2024 Best Paper] ★239 · Jan 3, 2026 · Updated last month
- LLaVA combined with the MAGVIT image tokenizer, training an MLLM without a vision encoder. Unifies image understanding and generation. ★39 · Jun 20, 2024 · Updated last year
- Streamline the fine-tuning process for multimodal models: PaliGemma 2, Florence-2, and Qwen2.5-VL ★2,660 · Updated this week
- [CVPR 2024 Oral] InternVL Family: A Pioneering Open-Source Alternative to GPT-4o (an open-source multimodal chat model approaching GPT-4o's performance) ★9,792 · Sep 22, 2025 · Updated 4 months ago
- ★29 · Aug 19, 2024 · Updated last year
- ★193 · Jun 3, 2025 · Updated 8 months ago
- A framework of small-scale large multimodal models ★960 · Feb 7, 2026 · Updated last week
- VibeVoice real-time 0.5B Swift port ★27 · Dec 12, 2025 · Updated 2 months ago
- Converts an SDXL checkpoint to Diffusers format; requires kohya-ss/sd-scripts as its core to work. ★13 · Sep 20, 2023 · Updated 2 years ago
- Prompts and evaluation data for LLMs on real-world coding and writing tasks ★16 · Sep 13, 2025 · Updated 5 months ago
- Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT and more. ★3,355 · May 19, 2025 · Updated 8 months ago
- Train Llama LoRAs easily ★31 · Aug 3, 2023 · Updated 2 years ago
- Official code of "EVF-SAM: Early Vision-Language Fusion for Text-Prompted Segment Anything Model" ★495 · Mar 17, 2025 · Updated 10 months ago
- [ICCV 2025] LLaVA-CoT, a visual language model capable of spontaneous, systematic reasoning ★2,124 · Dec 12, 2025 · Updated 2 months ago
- [CVPR 2024] Real-Time Open-Vocabulary Object Detection ★6,208 · Feb 26, 2025 · Updated 11 months ago
- [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA), built towards GPT-4V-level capabilities and beyond. ★24,446 · Aug 12, 2024 · Updated last year
- Repository for Meta Chameleon, a mixed-modal early-fusion foundation model from FAIR. ★2,085 · Jul 29, 2024 · Updated last year
- RobustSAM: Segment Anything Robustly on Degraded Images (CVPR 2024 Highlight) ★364 · Aug 31, 2024 · Updated last year
- Reproduction of LLaVA-v1.5 based on the Llama-3-8B LLM backbone. ★65 · Oct 25, 2024 · Updated last year
- Web-based tool to convert a model into a MyriadX blob ★16 · Dec 9, 2025 · Updated 2 months ago
- Freeing data processing from scripting madness by providing a set of platform-agnostic, customizable pipeline processing blocks. ★2,885 · Updated this week
- ★4,552 · Sep 14, 2025 · Updated 5 months ago
- Witness the "aha moment" of VLMs with less than $3. ★4,029 · May 19, 2025 · Updated 8 months ago
- Solve visual understanding with reinforced VLMs ★5,833 · Oct 21, 2025 · Updated 3 months ago
- NeurIPS 2025 Spotlight; ICLR 2024 Spotlight; CVPR 2024; EMNLP 2024 ★1,812 · Nov 27, 2025 · Updated 2 months ago