Koorye / InSpire
Official implementation of the paper "InSpire: Vision-Language-Action Models with Intrinsic Spatial Reasoning"
☆29 · Updated 2 weeks ago
Alternatives and similar repositories for InSpire
Users interested in InSpire are comparing it to the libraries listed below.
- [CVPR 2025] Tra-MoE: Learning Trajectory Prediction Model from Multiple Domains for Adaptive Policy Conditioning ☆36 · Updated 2 months ago
- ☆95 · Updated last month
- [arXiv 2025] MoLe-VLA: Dynamic Layer-skipping Vision Language Action Model via Mixture-of-Layers for Efficient Robot Manipulation ☆36 · Updated 2 months ago
- Latent Motion Token as the Bridging Language for Robot Manipulation ☆105 · Updated last month
- ☆54 · Updated 4 months ago
- Official implementation of "RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics" ☆65 · Updated this week
- Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning ☆68 · Updated last month
- [CVPR 2025] Official PyTorch implementation of GLUS: Global-Local Reasoning Unified into A Single Large Language Model for Video Segmenta… ☆43 · Updated last week
- ☆46 · Updated 6 months ago
- [ICML 2025] OTTER: A Vision-Language-Action Model with Text-Aware Visual Feature Extraction ☆83 · Updated 2 months ago
- Official code of the paper "DeeR-VLA: Dynamic Inference of Multimodal Large Language Models for Efficient Robot Execution" ☆95 · Updated 4 months ago
- Repo for the paper "RoboMamba: Multimodal State Space Model for Efficient Robot Reasoning and Manipulation" ☆126 · Updated 6 months ago
- Code for FLIP: Flow-Centric Generative Planning for General-Purpose Manipulation Tasks ☆67 · Updated 6 months ago
- [ECCV 2024] Official implementation of C-Instructor: Controllable Navigation Instruction Generation with Chain of Thought Prompting ☆23 · Updated 6 months ago
- [ICCV 2025] Embodied Question Answering (EQA) benchmark and method ☆22 · Updated this week
- Official repo for AGNOSTOS, a cross-task manipulation benchmark, and X-ICM, a cross-task in-context manipulation (VLA) method ☆29 · Updated last month
- ☆41 · Updated 8 months ago
- GRAPE: Guided-Reinforced Vision-Language-Action Preference Optimization ☆130 · Updated 2 months ago
- Official implementation of the paper "Policy Contrastive Decoding for Robotic Foundation Models" ☆16 · Updated 3 weeks ago
- ☆49 · Updated 8 months ago
- Official implementation of SAME: Learning Generic Language-Guided Visual Navigation with State-Adaptive Mixture of Experts ☆18 · Updated 6 months ago
- [CVPR 2024] Repository for Vision-and-Language Navigation via Causal Learning ☆75 · Updated 3 weeks ago
- ☆71 · Updated last month
- A comprehensive collection of resources on dual-system VLA models, including papers, code, and related websites ☆43 · Updated 3 weeks ago
- [arXiv 2025] MMSI-Bench: A Benchmark for Multi-Image Spatial Intelligence ☆37 · Updated last week
- [Actively Maintained🔥] LLaVA-VLA: A Simple Yet Powerful Vision-Language-Action Model ☆75 · Updated last week
- [ICLR 2025] Official code implementation of Video-UTR: Unhackable Temporal Rewarding for Scalable Video MLLMs ☆54 · Updated 4 months ago
- Official implementation of the paper "VLA-Cache: Towards Efficient Vision-Language-Action Model via Adaptive Token Caching in Robotic Manipul…" ☆11 · Updated last week
- ☆25 · Updated last year
- SoFar: Language-Grounded Orientation Bridges Spatial Reasoning and Object Manipulation ☆165 · Updated last month