manycore-research / SpatialLM
SpatialLM: Training Large Language Models for Structured Indoor Modeling
☆3,558 · Updated 2 weeks ago
Alternatives and similar repositories for SpatialLM
Users interested in SpatialLM are comparing it to the libraries listed below.
- PyTorch code and models for VJEPA2 self-supervised learning from video. ☆1,972 · Updated last month
- [CVPR 2025] MASt3R-SLAM: Real-Time Dense SLAM with 3D Reconstruction Priors ☆2,403 · Updated 4 months ago
- [ICCV 2025] Implementation for Describe Anything: Detailed Localized Image and Video Captioning ☆1,299 · Updated last month
- New repo collection for NVIDIA Cosmos: https://github.com/nvidia-cosmos ☆8,060 · Updated last month
- [CVPR 2025] Magma: A Foundation Model for Multimodal AI Agents ☆1,765 · Updated 2 months ago
- YOLOE: Real-Time Seeing Anything [ICCV 2025] ☆1,508 · Updated last month
- ☆4,173 · Updated last week
- State-of-the-art Image & Video CLIP, Multimodal Large Language Models, and More! ☆1,473 · Updated 3 weeks ago
- [CVPR 2025 Best Paper Award] VGGT: Visual Geometry Grounded Transformer ☆10,298 · Updated this week
- Open-source unified multimodal model ☆4,687 · Updated last month
- OpenVLA: An open-source vision-language-action model for robotic manipulation. ☆3,423 · Updated 4 months ago
- Depth Pro: Sharp Monocular Metric Depth in Less Than a Second. ☆4,719 · Updated 3 months ago
- [CVPR'25 Oral] MoGe: Unlocking Accurate Monocular Geometry Estimation for Open-Domain Images with Optimal Training Supervision ☆1,490 · Updated this week
- VILA is a family of state-of-the-art vision language models (VLMs) for diverse multimodal AI tasks across the edge, data center, and cloud. ☆3,460 · Updated 2 weeks ago
- [CVPR 2025 Highlight] Video Depth Anything: Consistent Depth Estimation for Super-Long Videos ☆1,267 · Updated last month
- Generating Immersive, Explorable, and Interactive 3D Worlds from Words or Pixels with Hunyuan3D World Model ☆1,570 · Updated this week
- Unifying 3D Mesh Generation with Language Models ☆1,082 · Updated 4 months ago
- DINO-X: The World's Top-Performing Vision Model for Open-World Object Detection and Understanding ☆1,160 · Updated 2 weeks ago
- From Images to High-Fidelity 3D Assets with Production-Ready PBR Material ☆1,782 · Updated this week
- A unified library for object tracking featuring clean room re-implementations of leading multi-object tracking algorithms ☆1,891 · Updated this week
- Grounded SAM 2: Ground and Track Anything in Videos with Grounding DINO, Florence-2 and SAM 2 ☆2,532 · Updated 2 months ago
- The simplest, fastest repository for training/finetuning small-sized VLMs. ☆3,830 · Updated last week
- A suite of image and video neural tokenizers ☆1,659 · Updated 5 months ago
- [CVPR 2025 Best Paper Nomination] FoundationStereo: Zero-Shot Stereo Matching ☆1,936 · Updated 3 weeks ago
- [SIGGRAPH Asia 2023 (Technical Communications)] EasyVolcap: Accelerating Neural Volumetric Video Research ☆1,438 · Updated 6 months ago
- [NeurIPS 2024] Depth Anything V2. A More Capable Foundation Model for Monocular Depth Estimation ☆6,163 · Updated 6 months ago
- [ICCV 2025] LLaVA-CoT, a visual language model capable of spontaneous, systematic reasoning ☆2,043 · Updated 2 weeks ago
- Witness the aha moment of VLM with less than $3. ☆3,882 · Updated 2 months ago
- Official repository of "SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory" ☆6,893 · Updated 4 months ago
- TripoSG: High-Fidelity 3D Shape Synthesis using Large-Scale Rectified Flow Models ☆1,328 · Updated 3 months ago