zrporz / AutoSeg-SAM2
This is an automatic full-segmentation tool built on Segment-Anything-2 (SAM2) and Segment-Anything-1 (SAM1). It performs automatic full segmentation of a video, tracking every object and detecting objects that newly appear.
☆224 · Updated 6 months ago
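The description above suggests a pipeline that seeds SAM2's video predictor with automatic masks from SAM1 and then propagates them through the clip. The sketch below illustrates that general idea only; it is not the repository's actual entry point, and the checkpoint/config paths, the frame directory `video_frames`, and the simple first-frame-only seeding are assumptions for illustration.

```python
# Minimal sketch (assumed workflow, not AutoSeg-SAM2's real CLI):
# SAM1 generates masks on the first frame, SAM2 tracks them through the video.
import cv2
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator
from sam2.build_sam import build_sam2_video_predictor

# 1) SAM1: automatic mask generation on the first frame (placeholder checkpoint path)
sam1 = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
mask_generator = SamAutomaticMaskGenerator(sam1)
first_frame = cv2.cvtColor(cv2.imread("video_frames/00000.jpg"), cv2.COLOR_BGR2RGB)
masks = mask_generator.generate(first_frame)  # list of dicts with a boolean "segmentation" array

# 2) SAM2: register each mask as a tracked object and propagate through the video
predictor = build_sam2_video_predictor("configs/sam2.1/sam2.1_hiera_l.yaml",
                                       "sam2.1_hiera_large.pt")
state = predictor.init_state(video_path="video_frames")  # directory of JPEG frames
for obj_id, m in enumerate(masks):
    predictor.add_new_mask(state, frame_idx=0, obj_id=obj_id, mask=m["segmentation"])

for frame_idx, obj_ids, mask_logits in predictor.propagate_in_video(state):
    # threshold logits to per-object binary masks for this frame
    frame_masks = {oid: (mask_logits[i] > 0.0).cpu().numpy()
                   for i, oid in enumerate(obj_ids)}
```

Note that the actual tool also detects objects that appear after the first frame, which this first-frame-only sketch omits.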
Alternatives and similar repositories for AutoSeg-SAM2
Users interested in AutoSeg-SAM2 are comparing it to the repositories listed below
- [CVPR 2025] Code for Segment Any Motion in Videos ☆455 · Updated 7 months ago
- Orient Anything, ICML 2025 ☆372 · Updated 3 months ago
- [3DV 2026] SpatialGen: Layout-guided 3D Indoor Scene Generation ☆346 · Updated last week
- [ICCV 2025] DSO: Aligning 3D Generators with Simulation Feedback for Physical Soundness ☆169 · Updated 10 months ago
- Source code of paper "NVS-Solver: Video Diffusion Model as Zero-Shot Novel View Synthesizer" ☆311 · Updated 10 months ago
- Generative Camera Dolly: Extreme Monocular Dynamic Novel View Synthesis (ECCV 2024 Oral) - Official Implementation ☆281 · Updated 2 months ago
- [ICCV 2025] LangScene-X: Reconstruct Generalizable 3D Language-Embedded Scenes with TriMap Video Diffusion ☆296 · Updated 6 months ago
- GenXD: Generating Any 3D and 4D Scenes. ICLR 2025 ☆219 · Updated 10 months ago
- [ICLR 2025] Official implementation of "DiffSplat: Repurposing Image Diffusion Models for Scalable 3D Gaussian Splat Generation" ☆466 · Updated 5 months ago
- [CVPR 2025] Official code for Using Diffusion Priors for Video Amodal Segmentation ☆109 · Updated 2 months ago
- [CVPR 2025 Highlight] VideoScene: Distilling Video Diffusion Model to Generate 3D Scenes in One Step ☆340 · Updated 7 months ago
- [CVPR 2024 Oral] EscherNet: A Generative Model for Scalable View Synthesis ☆365 · Updated last year
- SpatialVID: A Large-Scale Video Dataset with Spatial Annotations ☆486 · Updated last week
- The official implementation of "GaussianCity: Generative Gaussian Splatting for Unbounded 3D City Generation" (CVPR 2025) ☆315 · Updated 6 months ago
- [ICLR 2025] GenPercept: Diffusion Models Trained with Large Data Are Transferable Visual Models ☆219 · Updated last year
- [ICCV 2025] Free4D: Tuning-free 4D Scene Generation with Spatial-Temporal Consistency ☆235 · Updated 3 months ago
- Code for "Director3D: Real-world Camera Trajectory and 3D Scene Generation from Text" (NeurIPS 2024) ☆370 · Updated 10 months ago
- [ECCV 2024] Improving 2D Feature Representations by 3D-Aware Fine-Tuning ☆309 · Updated last month
- [NeurIPS 2024] Diffusion4D: Fast Spatial-temporal Consistent 4D Generation via Video Diffusion Models ☆333 · Updated last year
- [ICLR 2025] 3DitScene: Editing Any Scene via Language-guided Disentangled Gaussian Splatting ☆259 · Updated last year
- PhysX: Physical-Grounded 3D Asset Generation (NeurIPS 2025, Spotlight) ☆352 · Updated last month
- Code for the paper "pix2gestalt: Amodal Segmentation by Synthesizing Wholes" (CVPR 2024) ☆194 · Updated 7 months ago
- "4DGen: Grounded 4D Content Generation with Spatial-temporal Consistency", Yuyang Yin*, Dejia Xu*, Zhangyang Wang, Yao Zhao, Yunchao Wei ☆248 · Updated last year
- The official implementation has been released at https://github.com/VAST-AI-Research/MIDI-3D ☆127 · Updated 10 months ago
- [ICCV 2025] GeometryCrafter: Consistent Geometry Estimation for Open-world Videos with Diffusion Priors ☆423 · Updated 4 months ago
- PyTorch implementation of "SMITE: Segment Me In TimE" (ICLR 2025) ☆212 · Updated 2 months ago
- High-quality and editable surfel 3D Gaussian generation through native 3D diffusion (ICLR 2025) ☆394 · Updated 8 months ago
- The official source code for "X-Ray: A Sequential 3D Representation for Generation" ☆114 · Updated 10 months ago
- Official repository for "Build-A-Scene: Interactive 3D Layout Control for Diffusion-Based Image Generation" (ICLR 2025) ☆72 · Updated 9 months ago
- ☆191 · Updated last month