roboflow / rf-detr
[ICLR 2026] RF-DETR is a real-time object detection and segmentation model architecture developed by Roboflow, SOTA on COCO, designed for fine-tuning.
☆5,527 · Updated this week
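Since the repository is pitched as a fine-tunable detector, a rough usage sketch may help orient readers. It assumes the `rfdetr` pip package exposes an `RFDETRBase` class with `predict()` and `train()` methods, as suggested by the project README; the exact class name, method signatures, and hyperparameters below are assumptions and may differ between releases.

```python
# pip install rfdetr
# Minimal sketch: COCO-pretrained inference, then fine-tuning on a custom
# COCO-format dataset. Class and argument names follow the rf-detr README
# but are treated as assumptions here and may vary by version.
from PIL import Image
from rfdetr import RFDETRBase  # assumed entry point

model = RFDETRBase()  # downloads COCO-pretrained weights on first use

# Inference on a local image (path and confidence threshold are illustrative).
image = Image.open("example.jpg")
detections = model.predict(image, threshold=0.5)
print(detections)

# Fine-tuning on a COCO-format dataset directory (hyperparameters illustrative).
model.train(
    dataset_dir="path/to/coco_dataset",
    epochs=10,
    batch_size=4,
    lr=1e-4,
)
```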
Alternatives and similar repositories for rf-detr
Users interested in rf-detr are comparing it to the libraries listed below.
- D-FINE: Redefine Regression Task of DETRs as Fine-grained Distribution Refinement [ICLR 2025 Spotlight] ☆3,008 · Updated last month
- YOLOE: Real-Time Seeing Anything [ICCV 2025] ☆2,029 · Updated 7 months ago
- Trackers gives you clean, modular re-implementations of leading multi-object tracking algorithms released under the permissive Apache 2.0… ☆2,389 · Updated this week
- [NeurIPS 2025] YOLOv12: Attention-Centric Real-Time Object Detectors ☆2,787 · Updated 4 months ago
- An MIT-licensed implementation of YOLOv9, YOLOv7, and YOLO-RD ☆1,579 · Updated last month
- [DEIMv2] Real Time Object Detection Meets DINOv3 ☆1,463 · Updated last month
- Images to inference with no labeling (use foundation models to train supervised models). ☆2,616 · Updated 8 months ago
- The repository provides code for running inference and finetuning with the Meta Segment Anything Model 3 (SAM 3), links for downloading t… ☆7,665 · Updated last week
- Turn any computer or edge device into a command center for your computer vision projects. ☆2,185 · Updated this week
- [ECCV 2024] API code for T-Rex2: Towards Generic Object Detection via Text-Visual Prompt Synergy ☆2,630 · Updated 3 months ago
- [CVPR 2024] Official RT-DETR (RTDETR paddle pytorch), Real-Time DEtection TRansformer, DETRs Beat YOLOs on Real-time Object Detection. 🔥… ☆4,841 · Updated 2 months ago
- [CVPR 2025] DEIM: DETR with Improved Matching for Fast Convergence ☆1,428 · Updated 4 months ago
- DINO-X: The World's Top-Performing Vision Model for Open-World Object Detection and Understanding ☆1,334 · Updated 6 months ago
- Official Implementation of CVPR 2024 highlight paper: Matching Anything by Segmenting Anything ☆1,362 · Updated 9 months ago
- Effortless AI-assisted data labeling with support from YOLO, Segment Anything (SAM + SAM2), and MobileSAM ☆3,178 · Updated last month
- Reference PyTorch implementation and models for DINOv3 ☆9,525 · Updated 2 months ago
- Framework-agnostic sliced/tiled inference + interactive UI + error analysis plots (see the sliced-inference sketch after this list) ☆5,096 · Updated last week
- Grounded SAM 2: Ground and Track Anything in Videos with Grounding DINO, Florence-2 and SAM 2 ☆3,265 · Updated 3 months ago
- [CVPR 2025] Official PyTorch implementation of "EdgeTAM: On-Device Track Anything Model" ☆871 · Updated 2 weeks ago
- Darknet/YOLO object detection framework ☆758 · Updated last week
- All-in-one training for vision models (YOLO, ViTs, RT-DETR, DINOv3): pretraining, fine-tuning, distillation. ☆1,311 · Updated this week
- Gaze-LLE: Gaze Target Estimation via Large-Scale Learned Encoders (CVPR 2025, Highlight) ☆819 · Updated 9 months ago
- Official repository of "SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory" ☆7,042 · Updated 10 months ago
- [NeurIPS 2025] SpatialLM: Training Large Language Models for Structured Indoor Modeling ☆4,231 · Updated 4 months ago
- [CVPR 2024] Real-Time Open-Vocabulary Object Detection ☆6,198 · Updated 11 months ago
- Streamline the fine-tuning process for multimodal models: PaliGemma 2, Florence-2, and Qwen2.5-VL ☆2,660 · Updated this week
- [ICCV 2025] Implementation for Describe Anything: Detailed Localized Image and Video Captioning ☆1,448 · Updated 7 months ago
- Grounding DINO 1.5: IDEA Research's Most Capable Open-World Object Detection Model Series ☆1,083 · Updated last year
- Convert JSON annotations into YOLO format. ☆1,183 · Updated 7 months ago
- Whereabouts Ascertainment for Low-lying Detectable Objects. The SOTA in FOSS AI for drones! ☆1,671 · Updated last year
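The sahi entry above (framework-agnostic sliced/tiled inference) is compact enough to illustrate here: the image is cut into overlapping tiles, each tile is passed through a standard detector, and the per-tile predictions are merged back into full-image coordinates, which helps with small objects in large images. Below is a minimal sketch using sahi's `AutoDetectionModel` and `get_sliced_prediction`; the backend name, weights path, and slice/overlap values are illustrative assumptions.

```python
# pip install sahi ultralytics
# Sketch of sliced (tiled) inference with SAHI: tile the image, run a
# detector on each tile, then merge predictions into full-image coordinates.
# model_type, model_path, and slice/overlap values are illustrative.
from sahi import AutoDetectionModel
from sahi.predict import get_sliced_prediction

detection_model = AutoDetectionModel.from_pretrained(
    model_type="ultralytics",   # backend identifier may differ by sahi version
    model_path="yolov8n.pt",    # any detector checkpoint supported by sahi
    confidence_threshold=0.3,
    device="cpu",
)

result = get_sliced_prediction(
    "large_aerial_image.jpg",
    detection_model,
    slice_height=512,
    slice_width=512,
    overlap_height_ratio=0.2,
    overlap_width_ratio=0.2,
)
result.export_visuals(export_dir="sahi_output/")  # save annotated image
```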