facebookresearch / EdgeTAM
[CVPR 2025] Official PyTorch implementation of "EdgeTAM: On-Device Track Anything Model"
☆549 · Updated 3 months ago
Alternatives and similar repositories for EdgeTAM
Users interested in EdgeTAM are comparing it to the repositories listed below.
- Efficient Track Anything ☆623 · Updated 7 months ago
- The repo for "Distill Any Depth: Distillation Creates a Stronger Monocular Depth Estimator" ☆636 · Updated 4 months ago
- [CVPR 2025] Code for Segment Any Motion in Videos ☆404 · Updated 2 months ago
- [CVPR 2025] RollingDepth: Video Depth without Video Models ☆568 · Updated 5 months ago
- Official code of "EVF-SAM: Early Vision-Language Fusion for Text-Prompted Segment Anything Model" ☆454 · Updated 5 months ago
- SeC: Advancing Complex Video Object Segmentation via Progressive Concept Construction ☆141 · Updated 2 weeks ago
- [CVPR 2025 Highlight] GEN3C: 3D-Informed World-Consistent Video Generation with Precise Camera Control ☆974 · Updated this week
- [ICCV 2025] Implementation for Describe Anything: Detailed Localized Image and Video Captioning ☆1,318 · Updated 2 months ago
- [ECCV 2024 & NeurIPS 2024] Official implementation of the papers TAPTR, TAPTRv2 & TAPTRv3 ☆266 · Updated 8 months ago
- Official implementation of Lotus: Diffusion-based Visual Foundation Model for High-quality Dense Prediction ☆717 · Updated 4 months ago
- [CVPR 2025 Highlight] DepthCrafter: Generating Consistent Long Depth Sequences for Open-world Videos ☆1,419 · Updated last month
- [CVPR 2025] Official repository for the paper "SAMWISE: Infusing Wisdom in SAM2 for Text-Driven Video Segmentation" ☆307 · Updated 2 months ago
- Depth Any Video with Scalable Synthetic Data (ICLR 2025) ☆502 · Updated 8 months ago
- A curated collection of the most exciting and influential CVPR 2025 papers. 🔥 [Paper + Code + Demo] ☆767 · Updated 2 months ago
- ☆235 · Updated 3 months ago
- RobustSAM: Segment Anything Robustly on Degraded Images (CVPR 2024 Highlight) ☆358 · Updated 11 months ago
- ☆44 · Updated 6 months ago
- [CVPR 2025 Highlight] Video Depth Anything: Consistent Depth Estimation for Super-Long Videos ☆1,345 · Updated this week
- PyTorch implementation of "SMITE: Segment Me In TimE" (ICLR 2025) ☆211 · Updated 4 months ago
- [CVPR 2025] "A Distractor-Aware Memory for Visual Object Tracking with SAM2" ☆371 · Updated 2 months ago
- ZIM: Zero-Shot Image Matting for Anything ☆341 · Updated 9 months ago
- Stable Virtual Camera: Generative View Synthesis with Diffusion Models ☆1,424 · Updated 2 months ago
- Muggled SAM: Segmentation without the magic ☆154 · Updated last week
- [WACV 2025 Oral] Fine-Tuning Image-Conditional Diffusion Models is Easier than You Think ☆464 · Updated 8 months ago
- Use Segment Anything 2, grounded with Florence-2, to auto-label data for training vision models ☆127 · Updated last year
- DiffuEraser: a diffusion model for video inpainting with strong content completeness and temporal consistency while maintaini… ☆507 · Updated 4 months ago
- [ICCV 2025 Oral] TrajectoryCrafter: Redirecting Camera Trajectory for Monocular Videos via Diffusion Models ☆742 · Updated 3 weeks ago
- Official PyTorch implementation of "DINO-Tracker: Taming DINO for Self-Supervised Point Tracking in a Single Video" (ECCV 2024) ☆505 · Updated 9 months ago
- [ACCV 2024 Oral] Official implementation of "Moving Object Segmentation: All You Need Is SAM (and Flow)", Junyu Xie, Charig Yang, Weidi … ☆313 · Updated 8 months ago
- Open-source repo for the Locate 3D model, 3D-JEPA, and the Locate 3D dataset ☆360 · Updated 2 months ago