UMass-Foundation-Model / MultiPLY
Code for MultiPLY: A Multisensory Object-Centric Embodied Large Language Model in 3D World
☆124 · Updated 3 months ago
Alternatives and similar repositories for MultiPLY:
Users interested in MultiPLY are comparing it to the repositories listed below.
- [ICLR 2023] SQA3D for embodied scene understanding and reasoning ☆124 · Updated last year
- GRAPE: Guided-Reinforced Vision-Language-Action Preference Optimization ☆76 · Updated 2 weeks ago
- ☆43 · Updated 2 months ago
- Code & Data for Grounded 3D-LLM with Referent Tokens ☆98 · Updated last month
- Official code of the paper "DeeR-VLA: Dynamic Inference of Multimodal Large Language Models for Efficient Robot Execution" ☆66 · Updated last week
- [CoRL 2024] Official repo of `A3VLM: Actionable Articulation-Aware Vision Language Model` ☆104 · Updated 4 months ago
- ☆48 · Updated last month
- [NeurIPS 2024] CLOVER: Closed-Loop Visuomotor Control with Generative Expectation for Robotic Manipulation ☆95 · Updated 2 months ago
- The repo of the paper `RoboMamba: Multimodal State Space Model for Efficient Robot Reasoning and Manipulation` ☆83 · Updated last month
- [CoRL 2024] VLM-Grounder: A VLM Agent for Zero-Shot 3D Visual Grounding ☆82 · Updated 2 months ago
- ☆44 · Updated 10 months ago
- [RSS 2024] Learning Manipulation by Predicting Interaction ☆100 · Updated 6 months ago
- ☆29 · Updated 4 months ago
- Official implementation of GR-MG ☆70 · Updated last month
- ☆48 · Updated 4 months ago
- [ECCV 2024] Empowering 3D Visual Grounding with Reasoning Capabilities ☆64 · Updated 4 months ago
- Latent Motion Token as the Bridging Language for Robot Manipulation ☆72 · Updated last week
- 🔥 [ICLR'25] LLaRA: Supercharging Robot Learning Data for Vision-Language Policy ☆187 · Updated 2 weeks ago
- ☆60 · Updated 4 months ago
- Public release for "Explore until Confident: Efficient Exploration for Embodied Question Answering" ☆42 · Updated 7 months ago
- ☆91 · Updated 6 months ago
- Code for "Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation" ☆43 · Updated 10 months ago
- [IROS 2024 Oral] ManipVQA: Injecting Robotic Affordance and Physically Grounded Information into Multi-Modal Large Language Models ☆84 · Updated 5 months ago
- ☆62 · Updated 5 months ago
- The official codebase for ManipLLM: Embodied Multimodal Large Language Model for Object-Centric Robotic Manipulation (CVPR 2024) ☆110 · Updated 7 months ago
- [NeurIPS 2024] Official code repository for the MSR3D paper ☆37 · Updated 2 weeks ago
- A Foundational Vision-Language-Action Model for Synergizing Cognition and Action in Robotic Manipulation ☆164 · Updated last week