AssafSinger94 / dino-tracker
Official PyTorch implementation of "DINO-Tracker: Taming DINO for Self-Supervised Point Tracking in a Single Video" (ECCV 2024)
☆504 · Updated 9 months ago
Alternatives and similar repositories for dino-tracker
Users interested in dino-tracker are comparing it to the repositories listed below.
- [CVPR 2025] "A Distractor-Aware Memory for Visual Object Tracking with SAM2" ☆369 · Updated last month
- [ACCV 2024 (Oral)] Official implementation of "Moving Object Segmentation: All You Need Is SAM (and Flow)", Junyu Xie, Charig Yang, Weidi … ☆312 · Updated 8 months ago
- [ICLR 2025 Oral] RMP-SAM: Towards Real-Time Multi-Purpose Segment Anything ☆259 · Updated 4 months ago
- Efficient Track Anything ☆620 · Updated 7 months ago
- [ECCV 2024 & NeurIPS 2024] Official implementation of the papers TAPTR, TAPTRv2 & TAPTRv3 ☆266 · Updated 8 months ago
- Muggled SAM: Segmentation without the magic ☆153 · Updated 4 months ago
- ☆129 · Updated 4 months ago
- Depth Any Video with Scalable Synthetic Data (ICLR 2025) ☆497 · Updated 8 months ago
- [NeurIPS 2024] Code release for "Segment Anything without Supervision" ☆479 · Updated 2 months ago
- [CVPR 2025] RollingDepth: Video Depth without Video Models ☆566 · Updated 5 months ago
- Official implementation of "Local All-Pair Correspondence for Point Tracking" (ECCV 2024) ☆193 · Updated 4 months ago
- Official implementation of the CVPR 2024 highlight paper "Matching Anything by Segmenting Anything" ☆1,331 · Updated 3 months ago
- Code release for CVPR'24 submission "OmniGlue" ☆670 · Updated last year
- A Graph-Based Approach for Category-Agnostic Pose Estimation [ECCV 2024] ☆371 · Updated 8 months ago
- Dense Optical Tracking: Connecting the Dots ☆301 · Updated 9 months ago
- [ICLR'24] Matcher: Segment Anything with One Shot Using All-Purpose Feature Matching ☆510 · Updated 8 months ago
- [AAAI 2025, Oral] DepthFM: Fast Monocular Depth Estimation with Flow Matching ☆632 · Updated 3 months ago
- Official code for "MITracker: Multi-View Integration for Visual Object Tracking" ☆102 · Updated 2 months ago
- PyTorch implementation of "SMITE: Segment Me In TimE" (ICLR 2025) ☆211 · Updated 4 months ago
- Official code for "EVF-SAM: Early Vision-Language Fusion for Text-Prompted Segment Anything Model" ☆451 · Updated 5 months ago
- Universal Monocular Metric Depth Estimation ☆990 · Updated 3 months ago
- [ECCV 2024 Oral, Best Paper Award Candidate] SEA-RAFT: Simple, Efficient, Accurate RAFT for Optical Flow ☆537 · Updated last month
- RobustSAM: Segment Anything Robustly on Degraded Images (CVPR 2024 Highlight) ☆358 · Updated 11 months ago
- Grounded Tracking for Streaming Videos ☆115 · Updated 10 months ago
- DINO-X: The World's Top-Performing Vision Model for Open-World Object Detection and Understanding ☆1,178 · Updated last month
- Official implementation of the HybridDepth model [WACV 2025, ISMAR 2024] ☆167 · Updated last month
- Repo for "Distill Any Depth: Distillation Creates a Stronger Monocular Depth Estimator" ☆634 · Updated 4 months ago
- Code for "MatchAnything: Universal Cross-Modality Image Matching with Large-Scale Pre-Training" (arXiv 2025) ☆992 · Updated 3 weeks ago
- [CVPR 2024] Official implementation of the paper "Visual In-context Learning" ☆488 · Updated last year
- [CVPR 2025] Prompt Depth Anything ☆891 · Updated 5 months ago