CoTracker is a model for tracking any point (pixel) in a video.
☆4,875 · Updated Mar 3, 2026
Alternatives and similar repositories for co-tracker
Users interested in co-tracker are comparing it to the repositories listed below.
- Tracking Any Point (TAP) · ☆1,820 · Updated Jan 22, 2026
- ☆2,264 · Updated Jun 11, 2024
- [CVPR 2024 Highlight] Official PyTorch implementation of SpatialTracker: Tracking Any 2D Pixels in 3D Space · ☆1,040 · Updated Aug 8, 2025
- The repository provides code for running inference with the Meta Segment Anything Model 2 (SAM 2), links for downloading the trained mode… · ☆18,737 · Updated this week
- DUSt3R: Geometric 3D Vision Made Easy · ☆7,031 · Updated Sep 24, 2025
- ☆1,242 · Updated Aug 2, 2025
- Official implementation of the paper "MonST3R: A Simple Approach for Estimating Geometry in the Presence of Motion" · ☆1,344 · Updated Jun 16, 2025
- VGGSfM: Visual Geometry Grounded Deep Structure From Motion · ☆1,369 · Updated Mar 11, 2025
- Grounding Image Matching in 3D with MASt3R · ☆2,797 · Updated Jun 30, 2025
- PyTorch code and models for the DINOv2 self-supervised learning method. · ☆12,553 · Updated Mar 12, 2026
- Track-Anything is a flexible and interactive tool for video object tracking and segmentation, based on Segment Anything, XMem, and E2FGVI… · ☆6,942 · Updated Dec 13, 2025
- [CVPR 2024] Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data. Foundation Model for Monocular Depth Estimation · ☆8,034 · Updated Jul 17, 2024
- [CVPR 2024 - Oral, Best Paper Award Candidate] Marigold: Repurposing Diffusion-Based Image Generators for Monocular Depth Estimation · ☆3,102 · Updated Dec 10, 2025
- Official PyTorch implementation of "DINO-Tracker: Taming DINO for Self-Supervised Point Tracking in a Single Video" (ECCV 2024) · ☆553 · Updated Nov 23, 2024
- [ECCV 2024 & NeurIPS 2024 & ICLR 2026] Official implementation of the paper TAPTR & TAPTRv2 & TAPTRv3 · ☆275 · Updated Feb 10, 2026
- Particle Video Revisited · ☆597 · Updated Sep 18, 2023
- PIPs++ · ☆319 · Updated Jul 8, 2024
- LightGlue: Local Feature Matching at Light Speed (ICCV 2023) · ☆4,422 · Updated Feb 18, 2026
- Grounded SAM: Marrying Grounding DINO with Segment Anything & Stable Diffusion & Recognize Anything - Automatically Detect, Segment and … · ☆17,464 · Updated Sep 5, 2024
- Original reference implementation of "3D Gaussian Splatting for Real-Time Radiance Field Rendering" · ☆21,018 · Updated Oct 17, 2025
- Official implementation of Continuous 3D Perception Model with Persistent State · ☆1,368 · Updated Aug 27, 2025
- The repository provides code for running inference with the Segment Anything Model (SAM), links for downloading the trained model checkpoi… · ☆53,684 · Updated Sep 18, 2024
- SAM-PT: Extending SAM to zero-shot video segmentation with point-based tracking. · ☆1,036 · Updated Jan 27, 2024
- [CVPR 2025 Best Paper Award] VGGT: Visual Geometry Grounded Transformer · ☆12,641 · Updated Mar 3, 2026
- [CVPR'25 Oral] MoGe: Unlocking Accurate Monocular Geometry Estimation for Open-Domain Images with Optimal Training Supervision · ☆2,367 · Updated Nov 2, 2025
- Code for the project "MegaSaM: Accurate, Fast and Robust Structure and Motion from Casual Dynamic Videos" · ☆1,256 · Updated Jan 5, 2026
- ☆2,256 · Updated Dec 22, 2023
- [ECCV 2024 - Oral, Best Paper Award Candidate] SEA-RAFT: Simple, Efficient, Accurate RAFT for Optical Flow · ☆627 · Updated Jun 29, 2025
- An open-source project dedicated to tracking and segmenting any objects in videos, either automatically or interactively. The primary alg… · ☆3,114 · Updated Mar 13, 2026
- Dense Optical Tracking: Connecting the Dots · ☆321 · Updated Nov 19, 2024
- [ICCV 2023] Tracking Anything with Decoupled Video Segmentation · ☆1,488 · Updated Apr 26, 2025
- Segment Anything in High Quality [NeurIPS 2023] · ☆4,193 · Updated Sep 12, 2025
- [NeurIPS 2024] Depth Anything V2. A More Capable Foundation Model for Monocular Depth Estimation · ☆7,764 · Updated Jan 22, 2025
- [TPAMI'23] Unifying Flow, Stereo and Depth Estimation · ☆1,360 · Updated Jan 4, 2025
- Depth Pro: Sharp Monocular Metric Depth in Less Than a Second. · ☆5,374 · Updated Apr 21, 2025
- A collaboration-friendly studio for NeRFs · ☆11,329 · Updated Jul 29, 2025
- Official implementation of "Local All-Pair Correspondence for Point Tracking" (ECCV 2024) · ☆207 · Updated Apr 16, 2025
- [CVPR 2024] 4D Gaussian Splatting for Real-Time Dynamic Scene Rendering · ☆3,454 · Updated Oct 27, 2024
- [CVPR 2025 Highlight] Video Depth Anything: Consistent Depth Estimation for Super-Long Videos · ☆1,802 · Updated Oct 7, 2025