OpenGVLab / VeBrain
Visual Embodied Brain: Let Multimodal Large Language Models See, Think, and Control in Spaces
☆80 · Updated 3 months ago
Alternatives and similar repositories for VeBrain
Users interested in VeBrain are comparing it to the repositories listed below.
- Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning ☆74 · Updated 4 months ago
- EgoVid-5M: A Large-Scale Video-Action Dataset for Egocentric Video Generation ☆116 · Updated last month
- OST-Bench: Evaluating the Capabilities of MLLMs in Online Spatio-temporal Scene Understanding ☆59 · Updated last month
- OmniSpatial: Towards Comprehensive Spatial Reasoning Benchmark for Vision Language Models ☆65 · Updated last month
- Official implementation of "RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics" ☆149 · Updated last month
- F1: A Vision Language Action Model Bridging Understanding and Generation to Actions ☆72 · Updated last week
- Official code for "Embodied-R1: Reinforced Embodied Reasoning for General Robotic Manipulation" ☆66 · Updated 3 weeks ago
- Unified Vision-Language-Action Model ☆190 · Updated 2 months ago
- Multi-SpatialMLLM: Multi-Frame Spatial Understanding with Multi-Modal Large Language Models ☆152 · Updated 3 months ago
- [CVPR'24 Highlight] The official code and data for the paper "EgoThink: Evaluating First-Person Perspective Thinking Capability of Vision-Language Models" ☆61 · Updated 5 months ago
- [arXiv:2502.05178] QLIP: Text-Aligned Visual Tokenization Unifies Auto-Regressive Multimodal Understanding and Generation ☆87 · Updated 6 months ago
- [ICLR 2025] Official implementation and benchmark evaluation repository of <PhysBench: Benchmarking and Enhancing Vision-Language Models for Physical World Understanding> ☆70 · Updated 3 months ago
- [ICCV 2025 Oral] Latent Motion Token as the Bridging Language for Learning Robot Manipulation from Videos ☆127 · Updated 4 months ago
- 🦾 A Dual-System VLA with System2 Thinking ☆108 · Updated 3 weeks ago
- Code for MultiPLY: A Multisensory Object-Centric Embodied Large Language Model in 3D World ☆130 · Updated 10 months ago
- Official implementation of CEED-VLA: Consistency Vision-Language-Action Model with Early-Exit Decoding ☆33 · Updated this week
- Being-H0: Vision-Language-Action Pretraining from Large-Scale Human Videos ☆150 · Updated 2 weeks ago
- The official repo for the paper "VQ-VLA: Improving Vision-Language-Action Models via Scaling Vector-Quantized Action Tokenizers" (ICCV 2025) ☆78 · Updated last month
- [ICML 2024] A Touch, Vision, and Language Dataset for Multimodal Alignment ☆83 · Updated 3 months ago
- Egocentric Video Understanding Dataset (EVUD) ☆31 · Updated last year
- Code for Stable Control Representations ☆25 · Updated 5 months ago
- Official repository for "iVideoGPT: Interactive VideoGPTs are Scalable World Models" (NeurIPS 2024), https://arxiv.org/abs/2405.15223 ☆145 · Updated 3 months ago
- Evaluate Multimodal LLMs as Embodied Agents ☆54 · Updated 7 months ago