InternRobotics / EmbodiedScan
[CVPR 2024 & NeurIPS 2024] EmbodiedScan: A Holistic Multi-Modal 3D Perception Suite Towards Embodied AI
☆646 · Updated 6 months ago
Alternatives and similar repositories for EmbodiedScan
Users interested in EmbodiedScan are comparing it to the repositories listed below.
- [ICML 2024] Official code repository for 3D embodied generalist agent LEO ☆471 · Updated 8 months ago
- [ICML 2024] 3D-VLA: A 3D Vision-Language-Action Generative World Model ☆609 · Updated last year
- The official repo for "SpatialBot: Precise Spatial Understanding with Vision Language Models". ☆325 · Updated 3 months ago
- [NeurIPS'24] This repository is the implementation of "SpatialRGPT: Grounded Spatial Reasoning in Vision Language Models" ☆298 · Updated last year
- 😎 Up-to-date & curated list of awesome 3D Visual Grounding papers, methods & resources. ☆253 · Updated 3 weeks ago
- ☆222 · Updated 4 months ago
- [CVPR'25] SeeGround: See and Ground for Zero-Shot Open-Vocabulary 3D Visual Grounding ☆198 · Updated 8 months ago
- Official implementation of the paper "StreamVLN: Streaming Vision-and-Language Navigation via SlowFast Context Modeling" ☆345 · Updated last month
- Official code for the CVPR 2025 paper "Navigation World Models". ☆479 · Updated last month
- [RSS 2025] Learning to Act Anywhere with Task-centric Latent Actions ☆897 · Updated last month
- Official repo and evaluation implementation of VSI-Bench ☆655 · Updated 4 months ago
- [ICCV 2025] A Simple yet Effective Pathway to Empowering LLaVA to Understand and Interact with 3D World ☆358 · Updated 2 months ago
- InternRobotics' open platform for building generalized navigation foundation models. ☆502 · Updated last week
- 🔥 SpatialVLA: a spatial-enhanced vision-language-action model trained on 1.1 million real robot episodes. Accepted at RSS 2025. ☆600 · Updated 6 months ago
- [ECCV 2024 Best Paper Candidate & TPAMI 2025] PointLLM: Empowering Large Language Models to Understand Point Clouds ☆952 · Updated 4 months ago
- A Foundational Vision-Language-Action Model for Synergizing Cognition and Action in Robotic Manipulation ☆388 · Updated last month
- [CVPR 2024] "LL3DA: Visual Interactive Instruction Tuning for Omni-3D Understanding, Reasoning, and Planning"; an interactive Large Langu… ☆309 · Updated last year
- Compose multimodal datasets 🎹 ☆525 · Updated 4 months ago
- InternVLA-M1: A Spatially Guided Vision-Language-Action Framework for Generalist Robot Policy ☆319 · Updated last week
- Heterogeneous Pre-trained Transformer (HPT) as Scalable Policy Learner. ☆521 · Updated last year
- ☆479 · Updated last month
- RynnVLA-002: A Unified Vision-Language-Action and World Model ☆790 · Updated 3 weeks ago
- [NeurIPS 2025 Spotlight] SoFar: Language-Grounded Orientation Bridges Spatial Reasoning and Object Manipulation ☆215 · Updated 5 months ago
- [TMLR 2024] Repository for VLN with foundation models ☆228 · Updated 2 months ago
- A curated list of 3D Vision papers relating to the Robotics domain in the era of large models, i.e. LLMs/VLMs, inspired by awesome-computer-vi… ☆784 · Updated last week
- Official code release for ConceptGraphs ☆739 · Updated 2 months ago
- StarVLA: A Lego-like Codebase for Vision-Language-Action Model Developing ☆595 · Updated last week
- ☆416 · Updated 3 weeks ago
- Code for "Chat-Scene: Bridging 3D Scene and Large Language Models with Object Identifiers" (NeurIPS 2024) ☆201 · Updated 2 months ago
- [RSS 2024 & RSS 2025] VLN-CE evaluation code of NaVid and Uni-NaVid ☆340 · Updated 2 months ago