OpenGVLab / VeBrain
Visual Embodied Brain: Let Multimodal Large Language Models See, Think, and Control in Spaces
☆84 · Updated 4 months ago
Alternatives and similar repositories for VeBrain
Users interested in VeBrain are comparing it to the libraries listed below.
- [NeurIPS 2025] EgoVid-5M: A Large-Scale Video-Action Dataset for Egocentric Video Generation ☆121 · Updated 2 months ago
- Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning ☆74 · Updated 5 months ago
- Official code for "Embodied-R1: Reinforced Embodied Reasoning for General Robotic Manipulation" ☆89 · Updated 2 months ago
- InternVLA-A1: Unifying Understanding, Generation, and Action for Robotic Manipulation ☆50 · Updated last month
- OmniSpatial: Towards Comprehensive Spatial Reasoning Benchmark for Vision Language Models ☆70 · Updated last month
- [NeurIPS 2025] OST-Bench: Evaluating the Capabilities of MLLMs in Online Spatio-temporal Scene Understanding ☆64 · Updated last month
- VLA-RFT: Vision-Language-Action Models with Reinforcement Fine-Tuning ☆63 · Updated 3 weeks ago
- [NeurIPS 2025] Official implementation of "RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics" ☆176 · Updated 3 weeks ago
- ☆41 · Updated 4 months ago
- ☆60 · Updated 8 months ago
- ☆81 · Updated last year
- [ICLR 2025] Official implementation and benchmark evaluation repository of <PhysBench: Benchmarking and Enhancing Vision-Language Models … ☆73 · Updated 4 months ago
- Unified Vision-Language-Action Model ☆213 · Updated 2 weeks ago
- Code for MultiPLY: A Multisensory Object-Centric Embodied Large Language Model in 3D World ☆133 · Updated last year
- [CVPR'24 Highlight] The official code and data for the paper "EgoThink: Evaluating First-Person Perspective Thinking Capability of Vision-Lan… ☆61 · Updated 7 months ago
- 🦾 A Dual-System VLA with System2 Thinking ☆114 · Updated 2 months ago
- Multi-SpatialMLLM: Multi-Frame Spatial Understanding with Multi-Modal Large Language Models ☆157 · Updated 2 weeks ago
- LLaVA-VLA: A Simple Yet Powerful Vision-Language-Action Model [Actively Maintained🔥] ☆163 · Updated last month
- [ICCV 2025 Oral] Latent Motion Token as the Bridging Language for Learning Robot Manipulation from Videos ☆137 · Updated 3 weeks ago
- [arXiv: 2502.05178] QLIP: Text-Aligned Visual Tokenization Unifies Auto-Regressive Multimodal Understanding and Generation ☆91 · Updated 7 months ago
- ☆51 · Updated last year
- InternVLA-M1: A Spatially Guided Vision-Language-Action Framework for Generalist Robot Policy ☆191 · Updated 2 weeks ago
- [NeurIPS 2024] Official code repository for the MSR3D paper ☆67 · Updated 3 months ago
- [IJCV] EgoPlan-Bench: Benchmarking Multimodal Large Language Models for Human-Level Planning ☆74 · Updated 10 months ago
- [ECCV 2024] Empowering 3D Visual Grounding with Reasoning Capabilities ☆80 · Updated last year
- ☆94 · Updated 3 weeks ago
- Egocentric Video Understanding Dataset (EVUD) ☆31 · Updated last year
- Source code for the paper "MindJourney: Test-Time Scaling with World Models for Spatial Reasoning" ☆85 · Updated 3 months ago
- The official repo for the paper "VQ-VLA: Improving Vision-Language-Action Models via Scaling Vector-Quantized Action Tokenizers" (ICCV 2025) ☆86 · Updated 2 months ago
- Code for Stable Control Representations ☆26 · Updated 6 months ago