yangchris11 / samurai
Official repository of "SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory"
☆7,032 · Updated 10 months ago
Alternatives and similar repositories for samurai
Users interested in samurai are comparing it to the libraries listed below.
- [NeurIPS 2024] Depth Anything V2. A More Capable Foundation Model for Monocular Depth Estimation ☆7,451 · Updated 11 months ago
- A unified library for object tracking featuring clean room re-implementations of leading multi-object tracking algorithms ☆2,206 · Updated this week
- CoTracker is a model for tracking any point (pixel) on a video. ☆4,783 · Updated 11 months ago
- Depth Pro: Sharp Monocular Metric Depth in Less Than a Second. ☆5,188 · Updated 8 months ago
- The repository provides code for running inference with the Meta Segment Anything Model 2 (SAM 2), links for downloading the trained mode… ☆18,315 · Updated last year
- High-resolution models for human tasks. ☆5,265 · Updated last year
- RF-DETR is a real-time object detection and segmentation model architecture developed by Roboflow, SOTA on COCO and designed for fine-tun… ☆5,195 · Updated this week
- Grounded SAM 2: Ground and Track Anything in Videos with Grounding DINO, Florence-2 and SAM 2 ☆3,217 · Updated 2 months ago
- [CVPR 2024] Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data. Foundation Model for Monocular Depth Estimation ☆7,955 · Updated last year
- YOLOE: Real-Time Seeing Anything [ICCV 2025] ☆1,998 · Updated 6 months ago
- Official Implementation of CVPR24 highlight paper: Matching Anything by Segmenting Anything ☆1,361 · Updated 8 months ago
- DINO-X: The World's Top-Performing Vision Model for Open-World Object Detection and Understanding ☆1,323 · Updated 5 months ago
- [CVPR 2024] Real-Time Open-Vocabulary Object Detection ☆6,154 · Updated 10 months ago
- Streamline the fine-tuning process for multimodal models: PaliGemma 2, Florence-2, and Qwen2.5-VL ☆2,655 · Updated this week
- [ECCV 2024] Official implementation of the paper "Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection" ☆9,585 · Updated last year
- Reference PyTorch implementation and models for DINOv3 ☆9,327 · Updated 2 months ago
- [ICCV 2023] Tracking Anything with Decoupled Video Segmentation ☆1,474 · Updated 8 months ago
- The repository provides code for running inference and finetuning with the Meta Segment Anything Model 3 (SAM 3), links for downloading t… ☆7,021 · Updated last week
- [NeurIPS 2025] SpatialLM: Training Large Language Models for Structured Indoor Modeling ☆4,192 · Updated 3 months ago
- [CVPR 2024 - Oral, Best Paper Award Candidate] Marigold: Repurposing Diffusion-Based Image Generators for Monocular Depth Estimation ☆3,055 · Updated last month
- SANA: Efficient High-Resolution Image Synthesis with Linear Diffusion Transformer ☆4,894 · Updated this week
- [CVPR 2025 Highlight] Video Depth Anything: Consistent Depth Estimation for Super-Long Videos ☆1,691 · Updated 3 months ago
- Turn any computer or edge device into a command center for your computer vision projects. ☆2,167 · Updated this week
- [CAAI AIR'24] Bilateral Reference for High-Resolution Dichotomous Image Segmentation ☆3,092 · Updated last month
- Segment Anything in High Quality [NeurIPS 2023] ☆4,160 · Updated 4 months ago
- Images to inference with no labeling (use foundation models to train supervised models). ☆2,585 · Updated 8 months ago
- [CVPR 2024 Highlight] Putting the Object Back Into Video Object Segmentation ☆997 · Updated last year
- A collection of tutorials on state-of-the-art computer vision models and techniques. Explore everything from foundational architectures l… ☆9,098 · Updated this week
- Efficient vision foundation models for high-resolution generation and perception. ☆3,202 · Updated 4 months ago
- New repo collection for NVIDIA Cosmos: https://github.com/nvidia-cosmos ☆8,084 · Updated 2 weeks ago