longkukuhi / RoboLLM
☆23 · Updated last year
Alternatives and similar repositories for RoboLLM
Users interested in RoboLLM are comparing it to the repositories listed below.
- ☆89 · Updated last year
- ☆56 · Updated last year
- Evaluate Multimodal LLMs as Embodied Agents ☆55 · Updated 10 months ago
- [ICLR'25] LLaRA: Supercharging Robot Learning Data for Vision-Language Policy ☆226 · Updated 8 months ago
- [ICML 2024] A Touch, Vision, and Language Dataset for Multimodal Alignment ☆88 · Updated 6 months ago
- ☆32 · Updated last year
- Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning ☆79 · Updated 7 months ago
- MiniGrid Implementation of BEHAVIOR Tasks ☆56 · Updated 3 months ago
- Code for MultiPLY: A Multisensory Object-Centric Embodied Large Language Model in 3D World ☆134 · Updated last year
- ☆34 · Updated 2 years ago
- ☆78 · Updated 7 months ago
- Codebase for HiP ☆90 · Updated 2 years ago
- ☆60 · Updated last year
- [ICML 2025] OTTER: A Vision-Language-Action Model with Text-Aware Visual Feature Extraction ☆112 · Updated 8 months ago
- [CoRL 2024] Official repo of `A3VLM: Actionable Articulation-Aware Vision Language Model` ☆121 · Updated last year
- Official implementation of ReALFRED (ECCV'24) ☆44 · Updated last year
- Official code for "Embodied-R1: Reinforced Embodied Reasoning for General Robotic Manipulation" ☆115 · Updated 4 months ago
- Code for "Interactive Task Planning with Language Models" ☆32 · Updated 7 months ago
- This repository compiles a list of papers on applying video technology in robotics! Star⭐ the repo and fol… ☆170 · Updated 10 months ago
- ☆43 · Updated last year
- [ICLR 2025] Official implementation and benchmark evaluation repository of <PhysBench: Benchmarking and Enhancing Vision-Language Models … ☆82 · Updated 6 months ago
- [IJCV] EgoPlan-Bench: Benchmarking Multimodal Large Language Models for Human-Level Planning ☆78 · Updated last year
- ☆61 · Updated 10 months ago
- Efficiently apply modification functions to RLDS/TFDS datasets. ☆39 · Updated last year
- [CVPR 2024] Binding Touch to Everything: Learning Unified Multimodal Tactile Representations ☆73 · Updated last month
- [ICRA 2025] RACER: Rich Language-Guided Failure Recovery Policies for Imitation Learning ☆40 · Updated last year
- GRAPE: Guided-Reinforced Vision-Language-Action Preference Optimization ☆153 · Updated 8 months ago
- Code for "Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation" ☆44 · Updated last year
- [ICML 2024] The official implementation of "DecisionNCE: Embodied Multimodal Representations via Implicit Preference Learning" ☆82 · Updated 6 months ago
- NeurIPS 2022 paper "VLMbench: A Compositional Benchmark for Vision-and-Language Manipulation" ☆97 · Updated 7 months ago