manycore-research / SpatialLM
SpatialLM: Training Large Language Models for Structured Indoor Modeling
☆3,352 · Updated last week
Alternatives and similar repositories for SpatialLM
Users interested in SpatialLM are comparing it to the repositories listed below.
- Implementation for Describe Anything: Detailed Localized Image and Video Captioning · ☆1,170 · Updated last month
- Open-source unified multimodal model · ☆4,204 · Updated this week
- [CVPR 2025] MASt3R-SLAM: Real-Time Dense SLAM with 3D Reconstruction Priors · ☆2,172 · Updated 3 months ago
- Grounded SAM 2: Ground and Track Anything in Videos with Grounding DINO, Florence-2 and SAM 2 · ☆2,328 · Updated 3 weeks ago
- VILA is a family of state-of-the-art vision language models (VLMs) for diverse multimodal AI tasks across the edge, data center, and clou… · ☆3,344 · Updated this week
- PyTorch code and models for VJEPA2 self-supervised learning from video. · ☆1,331 · Updated this week
- State-of-the-art Image & Video CLIP, Multimodal Large Language Models, and More! · ☆1,270 · Updated 3 weeks ago
- Unifying 3D Mesh Generation with Language Models · ☆1,058 · Updated 2 months ago
- [CVPR 2025] Magma: A Foundation Model for Multimodal AI Agents · ☆1,712 · Updated 3 weeks ago
- RF-DETR is a real-time object detection model architecture developed by Roboflow, SOTA on COCO & designed for fine-tuning. · ☆2,254 · Updated last week
- A suite of image and video neural tokenizers · ☆1,638 · Updated 4 months ago
- Seed1.5-VL, a vision-language foundation model designed to advance general-purpose multimodal understanding and reasoning, achieving stat… · ☆1,247 · Updated last week
- MAGI-1: Autoregressive Video Generation at Scale · ☆3,302 · Updated this week
- New repo collection for NVIDIA Cosmos: https://github.com/nvidia-cosmos · ☆8,022 · Updated last week
- DINO-X: The World's Top-Performing Vision Model for Open-World Object Detection and Understanding · ☆1,086 · Updated 3 weeks ago
- [CVPR 2025 Highlight] Video Depth Anything: Consistent Depth Estimation for Super-Long Videos · ☆1,082 · Updated last month
- Depth Pro: Sharp Monocular Metric Depth in Less Than a Second. · ☆4,564 · Updated 2 months ago
- [CVPR 2025 Best Paper Award] VGGT: Visual Geometry Grounded Transformer · ☆8,311 · Updated last week
- Witness the aha moment of VLM with less than $3. · ☆3,768 · Updated last month
- A unified library for object tracking featuring clean room re-implementations of leading multi-object tracking algorithms · ☆1,765 · Updated this week
- From Images to High-Fidelity 3D Assets with Production-Ready PBR Material · ☆1,071 · Updated this week
- LLaVA-CoT, a visual language model capable of spontaneous, systematic reasoning · ☆2,011 · Updated last month
- Grounding Image Matching in 3D with MASt3R · ☆2,272 · Updated 3 weeks ago
- Official repository of "SAMURAI: Adapting Segment Anything Model for Zero-Shot Visual Tracking with Motion-Aware Memory" · ☆6,846 · Updated 3 months ago
- [CVPR 2025 Best Paper Nomination] FoundationStereo: Zero-Shot Stereo Matching · ☆1,718 · Updated last month
- Solve Visual Understanding with Reinforced VLMs · ☆5,159 · Updated last month
- Qwen2.5-Omni is an end-to-end multimodal model by Qwen team at Alibaba Cloud, capable of understanding text, audio, vision, video, and pe… · ☆3,186 · Updated last week
- Stable Virtual Camera: Generative View Synthesis with Diffusion Models · ☆1,323 · Updated 2 weeks ago
- The simplest, fastest repository for training/finetuning small-sized VLMs. · ☆3,418 · Updated this week