roboflow / maestro
streamline the fine-tuning process for multimodal models: PaliGemma 2, Florence-2, and Qwen2.5-VL
☆2,604 · Updated last week
Alternatives and similar repositories for maestro
Users interested in maestro are comparing it to the libraries listed below.
- Recipes for shrinking, optimizing, customizing cutting edge vision models. ☆1,546 · Updated 2 weeks ago
- Turn any computer or edge device into a command center for your computer vision projects. ☆1,841 · Updated this week
- 4M: Massively Multimodal Masked Modeling ☆1,756 · Updated 2 months ago
- Must-have resource for anyone who wants to experiment with and build on the OpenAI vision API ☆1,683 · Updated 6 months ago
- RF-DETR is a real-time object detection model architecture developed by Roboflow, SOTA on COCO and designed for fine-tuning. ☆2,696 · Updated this week
- Curated list of top foundation and multimodal models! [Paper + Code + Examples + Tutorials] ☆627 · Updated last year
- Images to inference with no labeling (use foundation models to train supervised models). ☆2,359 · Updated 2 months ago
- The easiest way to deploy agents, MCP servers, models, RAG, pipelines and more. No MLOps. No YAML. ☆3,490 · Updated this week
- This repository is a curated collection of the most exciting and influential CVPR 2024 papers. [Paper + Code + Demo] ☆736 · Updated 2 months ago
- Everything about the SmolLM and SmolVLM family of models ☆3,108 · Updated last week
- A Python package that makes it easy for developers to create AI apps powered by various AI providers. ☆1,627 · Updated 4 months ago
- This series will take you on a journey from the fundamentals of NLP and Computer Vision to the cutting edge of Vision-Language Models. ☆1,117 · Updated 6 months ago
- A unified library for object tracking featuring clean-room re-implementations of leading multi-object tracking algorithms ☆1,899 · Updated this week
- VILA is a family of state-of-the-art vision language models (VLMs) for diverse multimodal AI tasks across the edge, data center, and clou… ☆3,466 · Updated this week
- mPLUG-DocOwl: Modularized Multimodal Large Language Model for Document Understanding ☆2,234 · Updated 2 months ago
- This repository provides the code and model checkpoints for the AIMv1 and AIMv2 research projects. ☆1,346 · Updated last week
- Vision agent ☆4,994 · Updated last week
- [ECCV 2024] API code for T-Rex2: Towards Generic Object Detection via Text-Visual Prompt Synergy ☆2,546 · Updated last week
- [ICCV 2025] LLaVA-CoT, a visual language model capable of spontaneous, systematic reasoning ☆2,047 · Updated 3 weeks ago
- Fast State-of-the-Art Static Embeddings ☆1,786 · Updated last week
- Tiny vision language model ☆8,275 · Updated last week
- YOLOE: Real-Time Seeing Anything [ICCV 2025] ☆1,581 · Updated last month
- [arXiv 2023] Set-of-Mark Prompting for GPT-4V and LMMs ☆1,438 · Updated 11 months ago
- ColiVara is a suite of services that allows you to store, search, and retrieve documents based on their visual embedding. ColiVara has st… ☆1,214 · Updated 3 months ago
- ☆712 · Updated last year
- The code used to train and run inference with the ColVision models, e.g. ColPali, ColQwen2, and ColSmol. ☆2,125 · Updated this week
- ☆1,935 · Updated this week
- Use late-interaction multi-modal models such as ColPali in just a few lines of code. ☆807 · Updated 6 months ago
- This repository contains the official implementation of the research paper, "MobileCLIP: Fast Image-Text Models through Multi-Modal Reinf… ☆1,016 · Updated 8 months ago
- MLE-Agent: Your intelligent companion for seamless AI engineering and research. Integrates with arXiv and Papers with Code to provide… ☆1,349 · Updated 2 weeks ago