OpenGVLab / VeBrain
Visual Embodied Brain: Let Multimodal Large Language Models See, Think, and Control in Spaces
☆64 · Updated 3 weeks ago
Alternatives and similar repositories for VeBrain
Users interested in VeBrain are comparing it to the libraries listed below.
- Unified Vision-Language-Action Model ☆61 · Updated this week
- EgoVid-5M: A Large-Scale Video-Action Dataset for Egocentric Video Generation ☆108 · Updated 7 months ago
- [arXiv:2502.05178] QLIP: Text-Aligned Visual Tokenization Unifies Auto-Regressive Multimodal Understanding and Generation ☆75 · Updated 3 months ago
- ☆49 · Updated 6 months ago
- Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning ☆68 · Updated last month
- Multi-SpatialMLLM: Multi-Frame Spatial Understanding with Multi-Modal Large Language Models ☆123 · Updated last month
- ☆37 · Updated 2 weeks ago
- ☆38 · Updated last week
- A list of works on video generation towards world models ☆151 · Updated this week
- ☆74 · Updated 9 months ago
- [ICLR 2025] Official implementation and benchmark evaluation repository of <PhysBench: Benchmarking and Enhancing Vision-Language Models … ☆64 · Updated 3 weeks ago
- Code for FLIP: Flow-Centric Generative Planning for General-Purpose Manipulation Tasks ☆67 · Updated 6 months ago
- [ICLR 2025 Spotlight] Grounding Video Models to Actions through Goal Conditioned Exploration ☆48 · Updated last month
- Latent Motion Token as the Bridging Language for Robot Manipulation ☆105 · Updated last month
- Official implementation of "RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics" ☆65 · Updated this week
- [NeurIPS 2024] Official code repository for the MSR3D paper ☆60 · Updated last week
- SpaceR: The first MLLM empowered by SG-RLVR for video spatial reasoning ☆63 · Updated 2 weeks ago
- ☆49 · Updated 8 months ago
- [ICML 2024] A Touch, Vision, and Language Dataset for Multimodal Alignment ☆78 · Updated 3 weeks ago
- [ECCV 2024] Empowering 3D Visual Grounding with Reasoning Capabilities ☆74 · Updated 8 months ago
- IMProv: Inpainting-based Multimodal Prompting for Computer Vision Tasks ☆57 · Updated 9 months ago
- [CVPR'24 Highlight] The official code and data for the paper "EgoThink: Evaluating First-Person Perspective Thinking Capability of Vision-Lan… ☆59 · Updated 3 months ago
- Can 3D Vision-Language Models Truly Understand Natural Language? ☆21 · Updated last year
- OmniSpatial: Towards Comprehensive Spatial Reasoning Benchmark for Vision Language Models ☆40 · Updated 3 weeks ago
- Official code for MotionBench (CVPR 2025) ☆45 · Updated 3 months ago
- [ECCV 2024] M3DBench introduces a comprehensive 3D instruction-following dataset with support for interleaved multi-modal prompts. ☆60 · Updated 8 months ago
- [CVPR 2025] A framework named B^2-DiffuRL for RL-based diffusion model fine-tuning. ☆30 · Updated 2 months ago
- ☆25 · Updated last year
- HandsOnVLM: Vision-Language Models for Hand-Object Interaction Prediction ☆32 · Updated 6 months ago
- [CVPR 2024] Situational Awareness Matters in 3D Vision Language Reasoning ☆39 · Updated 6 months ago