OpenGVLab / VeBrain
Visual Embodied Brain: Let Multimodal Large Language Models See, Think, and Control in Spaces
☆75 · Updated last month
Alternatives and similar repositories for VeBrain
Users interested in VeBrain are comparing it to the repositories listed below
- EgoVid-5M: A Large-Scale Video-Action Dataset for Egocentric Video Generation ☆109 · Updated 8 months ago
- Unified Vision-Language-Action Model ☆128 · Updated 2 weeks ago
- Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning ☆68 · Updated 2 months ago
- DreamVLA: A Vision-Language-Action Model Dreamed with Comprehensive World Knowledge ☆71 · Updated this week
- [arXiv: 2502.05178] QLIP: Text-Aligned Visual Tokenization Unifies Auto-Regressive Multimodal Understanding and Generation ☆76 · Updated 4 months ago
- ☆37 · Updated last month
- [CVPR'24 Highlight] The official code and data for the paper "EgoThink: Evaluating First-Person Perspective Thinking Capability of Vision-Language Models" ☆60 · Updated 3 months ago
- OST-Bench: Evaluating the Capabilities of MLLMs in Online Spatio-temporal Scene Understanding ☆41 · Updated this week
- ☆49 · Updated 7 months ago
- ☆75 · Updated 10 months ago
- Official implementation of "RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics" ☆96 · Updated last week
- FleVRS: Towards Flexible Visual Relationship Segmentation, NeurIPS 2024 ☆21 · Updated 7 months ago
- Multi-SpatialMLLM: Multi-Frame Spatial Understanding with Multi-Modal Large Language Models ☆135 · Updated last month
- OmniSpatial: Towards Comprehensive Spatial Reasoning Benchmark for Vision Language Models ☆48 · Updated 2 weeks ago
- [ECCV 2024] M3DBench introduces a comprehensive 3D instruction-following dataset with support for interleaved multi-modal prompts. ☆60 · Updated 9 months ago
- High-Resolution Visual Reasoning via Multi-Turn Grounding-Based Reinforcement Learning ☆29 · Updated last week
- ☆69 · Updated 2 weeks ago
- Can 3D Vision-Language Models Truly Understand Natural Language? ☆21 · Updated last year
- [ICCV 2025] Latent Motion Token as the Bridging Language for Robot Manipulation ☆110 · Updated 2 months ago
- SpaceR: The first MLLM empowered by SG-RLVR for video spatial reasoning ☆69 · Updated last week
- [ICML 2024] A Touch, Vision, and Language Dataset for Multimodal Alignment ☆78 · Updated last month
- ☆55 · Updated 4 months ago
- [ICLR 2025 Spotlight] Grounding Video Models to Actions through Goal Conditioned Exploration ☆50 · Updated 2 months ago
- IMProv: Inpainting-based Multimodal Prompting for Computer Vision Tasks ☆57 · Updated 9 months ago
- Latest Advances on Embodied Multimodal LLMs (or Vision-Language-Action Models) ☆116 · Updated last year
- ☆70 · Updated 7 months ago
- WorldVLA: Towards Autoregressive Action World Model ☆268 · Updated last week
- [CVPR 2025] Official PyTorch Implementation of GLUS: Global-Local Reasoning Unified into A Single Large Language Model for Video Segmentation ☆45 · Updated 3 weeks ago
- ☆22 · Updated 4 months ago
- [ICLR 2025] Official implementation and benchmark evaluation repository of "PhysBench: Benchmarking and Enhancing Vision-Language Models …" ☆64 · Updated last month