[ICML 2025] OTTER: A Vision-Language-Action Model with Text-Aware Visual Feature Extraction
☆116 · Apr 14, 2025 · Updated 11 months ago
Alternatives and similar repositories for otter
Users interested in otter are comparing it to the libraries listed below.
- ☆27 · Mar 6, 2025 · Updated last year
- Code for Point Policy: Unifying Observations and Actions with Key Points for Robot Manipulation · ☆89 · Jul 21, 2025 · Updated 8 months ago
- ☆14 · Feb 13, 2025 · Updated last year
- Fine-Tuning Vision-Language-Action Models: Optimizing Speed and Success · ☆1,094 · Sep 9, 2025 · Updated 6 months ago
- [ICRA 2025] In-Context Imitation Learning via Next-Token Prediction · ☆109 · Mar 17, 2025 · Updated last year
- Official PyTorch Implementation of Unified Video Action Model (RSS 2025) · ☆350 · Jul 23, 2025 · Updated 7 months ago
- ☆69 · Jan 8, 2025 · Updated last year
- 🔥 SpatialVLA: a spatial-enhanced vision-language-action model trained on 1.1 million real robot episodes. Accepted at RSS 2025. · ☆673 · Jun 23, 2025 · Updated 8 months ago
- Interactive Post-Training for Vision-Language-Action Models · ☆163 · Jun 4, 2025 · Updated 9 months ago
- [ICLR 2025] LAPA: Latent Action Pretraining from Videos · ☆485 · Jan 22, 2025 · Updated last year
- ☆61 · Jul 15, 2025 · Updated 8 months ago
- [ICLR 2025🎉] The official implementation of the paper "Robots Pre-Train Robots: Manipulation-Centric Robotic Representation from Lar… · ☆93 · Jan 22, 2025 · Updated last year
- ☆96 · Sep 4, 2024 · Updated last year
- Code for FLIP: Flow-Centric Generative Planning for General-Purpose Manipulation Tasks · ☆79 · Dec 12, 2024 · Updated last year
- ☆76 · Oct 18, 2024 · Updated last year
- OpenVLA: An open-source vision-language-action model for robotic manipulation. · ☆350 · Mar 19, 2025 · Updated last year
- ☆278 · Aug 26, 2024 · Updated last year
- Efficiently apply modification functions to RLDS/TFDS datasets. · ☆42 · Jun 5, 2024 · Updated last year
- ICCV2025 · ☆163 · Dec 10, 2025 · Updated 3 months ago
- [ICLR25] BID-Robot · ☆65 · Oct 19, 2025 · Updated 5 months ago
- ☆79 · Aug 29, 2025 · Updated 6 months ago
- Official repo of VLABench, a large-scale benchmark designed for fairly evaluating VLAs, Embodied Agents, and VLMs. · ☆405 · Nov 11, 2025 · Updated 4 months ago
- Official code for "Behavior Generation with Latent Actions" (ICML 2024 Spotlight) · ☆200 · Feb 28, 2024 · Updated 2 years ago
- ☆23 · Jun 14, 2025 · Updated 9 months ago
- Official Algorithm Codebase for the Paper "BEHAVIOR Robot Suite: Streamlining Real-World Whole-Body Manipulation for Everyday Household A… · ☆165 · Aug 24, 2025 · Updated 6 months ago
- Repo for Bring Your Own Vision-Language-Action (VLA) model, arXiv 2024 · ☆37 · Jan 22, 2025 · Updated last year
- Re-implementation of the pi0 vision-language-action (VLA) model from Physical Intelligence · ☆1,424 · Jan 31, 2025 · Updated last year
- ☆90 · Sep 23, 2025 · Updated 5 months ago
- ☆12 · Nov 18, 2023 · Updated 2 years ago
- A Foundational Vision-Language-Action Model for Synergizing Cognition and Action in Robotic Manipulation · ☆406 · Oct 30, 2025 · Updated 4 months ago
- Official implementation of "OneTwoVLA: A Unified Vision-Language-Action Model with Adaptive Reasoning" · ☆217 · May 30, 2025 · Updated 9 months ago
- The official implementation of the video-generation part of "This&That: Language-Gesture Controlled Video Generation for Robot Plannin… · ☆49 · Dec 19, 2025 · Updated 3 months ago
- [SIGGRAPH Asia 2024 Conference] PC-Planner: Physics-Constrained Self-Supervised Learning for Robust Neural Motion Planning with Shape-Awa… · ☆18 · Updated this week
- [ICML 2024] 3D-VLA: A 3D Vision-Language-Action Generative World Model · ☆622 · Oct 29, 2024 · Updated last year
- RDT-1B: a Diffusion Foundation Model for Bimanual Manipulation · ☆1,650 · Jan 21, 2026 · Updated 2 months ago
- [CVPR 2025] The official implementation of "Universal Actions for Enhanced Embodied Foundation Models" · ☆233 · Nov 6, 2025 · Updated 4 months ago
- [ECCV 2024] 🎉 Official repository of "Robo-ABC: Affordance Generalization Beyond Categories via Semantic Correspondence for Robot Manipu… · ☆99 · Nov 26, 2024 · Updated last year
- [CoRL25] GraspVLA: a Grasping Foundation Model Pre-trained on Billion-scale Synthetic Action Data · ☆349 · Dec 29, 2025 · Updated 2 months ago
- [RSS 2024] Code for "Multimodal Diffusion Transformer: Learning Versatile Behavior from Multimodal Goals" for CALVIN experiments with pre… · ☆169 · Oct 16, 2024 · Updated last year