allenai / molmoact
Official Repository for MolmoAct
☆281 · Updated last month
Alternatives and similar repositories for molmoact
Users interested in molmoact are comparing it to the repositories listed below.
- Unified World Models: Coupling Video and Action Diffusion for Pretraining on Large Robotic Datasets ☆175 · Updated 3 months ago
- Official PyTorch Implementation of Unified Video Action Model (RSS 2025) ☆314 · Updated 5 months ago
- [ICML 2025] OTTER: A Vision-Language-Action Model with Text-Aware Visual Feature Extraction ☆114 · Updated 9 months ago
- Team Comet's 2025 BEHAVIOR Challenge Codebase ☆195 · Updated last week
- VLAC: A Vision-Language-Action-Critic Model for Robotic Real-World Reinforcement Learning ☆258 · Updated 3 months ago
- InternVLA-A1: Unifying Understanding, Generation, and Action for Robotic Manipulation ☆226 · Updated this week
- Being-H0: Vision-Language-Action Pretraining from Large-Scale Human Videos ☆203 · Updated 4 months ago
- Code for subgoal synthesis via image editing ☆144 · Updated 2 years ago
- Official Repository for SAM2Act ☆220 · Updated 4 months ago
- A Benchmark for Low-Level Manipulation in Home Rearrangement Tasks ☆170 · Updated 3 weeks ago
- AutoEval: Autonomous Evaluation of Generalist Robot Manipulation Policies in the Real World | CoRL 2025 ☆91 · Updated last week
- F1: A Vision Language Action Model Bridging Understanding and Generation to Actions ☆153 · Updated last week
- ☆153 · Updated last year
- GRAPE: Guided-Reinforced Vision-Language-Action Preference Optimization ☆154 · Updated 9 months ago
- VLA-0: Building State-of-the-Art VLAs with Zero Modification ☆423 · Updated this week
- ☆64 · Updated last year
- ☆261 · Updated last year
- Official implementation of "Data Scaling Laws in Imitation Learning for Robotic Manipulation" ☆197 · Updated last year
- [ICRA 2025] In-Context Imitation Learning via Next-Token Prediction ☆105 · Updated 9 months ago
- A Vision-Language Model for Spatial Affordance Prediction in Robotics ☆209 · Updated 5 months ago
- [ICLR 2025] LAPA: Latent Action Pretraining from Videos ☆438 · Updated 11 months ago
- Interactive Post-Training for Vision-Language-Action Models ☆157 · Updated 7 months ago
- [ICLR 2025 Oral] Seer: Predictive Inverse Dynamics Models are Scalable Learners for Robotic Manipulation ☆273 · Updated 6 months ago
- [RSS 2024] Code for "Multimodal Diffusion Transformer: Learning Versatile Behavior from Multimodal Goals" for CALVIN experiments with pre… ☆167 · Updated last year
- Official codebase for "Any-point Trajectory Modeling for Policy Learning" ☆268 · Updated 6 months ago
- Official Repository of "RoboEngine: Plug-and-Play Robot Data Augmentation with Semantic Robot Segmentation and Background Generation" ☆147 · Updated 7 months ago
- Theia: Distilling Diverse Vision Foundation Models for Robot Learning ☆265 · Updated 2 months ago
- OpenVLA: An open-source vision-language-action model for robotic manipulation. ☆324 · Updated 9 months ago
- Nvidia GEAR Lab's initiative to solve the robotics data problem using world models ☆436 · Updated 2 months ago
- ☆53 · Updated 5 months ago