2644521362 / SC-MLLM
Related projects
Alternatives and complementary repositories for SC-MLLM
- [IROS 2024 Oral] ManipVQA: Injecting Robotic Affordance and Physically Grounded Information into Multi-Modal Large Language Models
- The official codebase for ManipLLM: Embodied Multimodal Large Language Model for Object-Centric Robotic Manipulation (CVPR 2024)
- [ICCV'23] Learning Vision-and-Language Navigation from YouTube Videos
- Official implementation of GR-MG
- Aligning Knowledge Graph with Visual Perception for Object-goal Navigation (ICRA 2024)
- Code for "Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation"
- Public release for "Explore until Confident: Efficient Exploration for Embodied Question Answering"
- [CoRL 2024] Official repo of `A3VLM: Actionable Articulation-Aware Vision Language Model`
- [ICRA 2023] Grounding Language with Visual Affordances over Unstructured Data
- [ECCV 2024] Official implementation of C-Instructor: Controllable Navigation Instruction Generation with Chain of Thought Prompting
- [NeurIPS 2024] CLOVER: Closed-Loop Visuomotor Control with Generative Expectation for Robotic Manipulation
- [RSS 2024] Learning Manipulation by Predicting Interaction
- Repository for the paper `RoboMamba: Multimodal State Space Model for Efficient Robot Reasoning and Manipulation`
- Reimplementation of GR-1, a generalized policy for robot manipulation
- Official implementation of Learning from Unlabeled 3D Environments for Vision-and-Language Navigation (ECCV'22)
- [CoRL 2023] XSkill: Cross Embodiment Skill Discovery
- Find What You Want: Learning Demand-conditioned Object Attribute Space for Demand-driven Navigation
- MOKA: Open-World Robotic Manipulation through Mark-based Visual Prompting (RSS 2024)
- Repository for Vision-and-Language Navigation via Causal Learning (CVPR 2024)
- [NeurIPS 2022] VLMbench: A Compositional Benchmark for Vision-and-Language Manipulation