ustcwhy / BitVLA
Official implementation for BitVLA: 1-bit Vision-Language-Action Models for Robotics Manipulation
☆73 · Updated last month
Alternatives and similar repositories for BitVLA
Users interested in BitVLA are comparing it to the repositories listed below.
- NORA: A Small Open-Sourced Generalist Vision Language Action Model for Embodied Tasks ☆168 · Updated last month
- 🦾 A Dual-System VLA with System2 Thinking ☆99 · Updated last week
- Official implementation of CEED-VLA: Consistency Vision-Language-Action Model with Early-Exit Decoding ☆29 · Updated last month
- Official implementation of Chain-of-Action: Trajectory Autoregressive Modeling for Robotic Manipulation ☆61 · Updated last month
- InstructVLA: Vision-Language-Action Instruction Tuning from Understanding to Manipulation ☆39 · Updated last month
- The official repo for the paper "VQ-VLA: Improving Vision-Language-Action Models via Scaling Vector-Quantized Action Tokenizers" (ICCV 2025) ☆72 · Updated 3 weeks ago
- Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning ☆73 · Updated 3 months ago
- Source codes for the paper "MindJourney: Test-Time Scaling with World Models for Spatial Reasoning" ☆79 · Updated last month
- Official code for "Embodied-R1: Reinforced Embodied Reasoning for General Robotic Manipulation" ☆51 · Updated last week
- The repo of the paper "RoboMamba: Multimodal State Space Model for Efficient Robot Reasoning and Manipulation" ☆132 · Updated 8 months ago
- Nvidia GEAR Lab's initiative to solve the robotics data problem using world models ☆289 · Updated last week
- ☆78 · Updated 11 months ago
- Distributed, scalable benchmarking of generalist robot policies ☆48 · Updated 2 months ago
- Unified Vision-Language-Action Model ☆185 · Updated last month
- ICCV2025 ☆114 · Updated last week
- Unified World Models: Coupling Video and Action Diffusion for Pretraining on Large Robotic Datasets ☆119 · Updated last month
- [CVPR 2025] Source codes for the paper "3D-Mem: 3D Scene Memory for Embodied Exploration and Reasoning" ☆171 · Updated 2 months ago
- ☆55 · Updated 6 months ago
- LLaVA-VLA: A Simple Yet Powerful Vision-Language-Action Model [Actively Maintained🔥] ☆134 · Updated this week
- ✨✨ Official implementation of BridgeVLA ☆120 · Updated 2 months ago
- ☆55 · Updated 7 months ago
- Code for FLIP: Flow-Centric Generative Planning for General-Purpose Manipulation Tasks ☆72 · Updated 8 months ago
- Official implementation of "OneTwoVLA: A Unified Vision-Language-Action Model with Adaptive Reasoning" ☆178 · Updated 3 months ago
- PhysVLM: Enabling Visual Language Models to Understand Robotic Physical Reachability ☆24 · Updated 5 months ago
- WorldVLA: Towards Autoregressive Action World Model ☆363 · Updated last month
- Embodied-Reasoner: Synergizing Visual Search, Reasoning, and Action for Embodied Interactive Tasks ☆163 · Updated 3 months ago
- A Vision-Language-Model for Detecting and Reasoning Over Failures in Robotic Manipulation ☆37 · Updated 5 months ago
- [ICML'25] The PyTorch implementation of the paper "AdaWorld: Learning Adaptable World Models with Latent Actions" ☆147 · Updated 2 months ago
- AutoEval: Autonomous Evaluation of Generalist Robot Manipulation Policies in the Real World ☆79 · Updated 2 months ago
- Official Repository for MolmoAct ☆109 · Updated last week