Max-Fu / otter
[ICML 2025] OTTER: A Vision-Language-Action Model with Text-Aware Visual Feature Extraction
☆105 · Updated 5 months ago
Alternatives and similar repositories for otter
Users interested in otter are comparing it to the libraries listed below.
- ICCV2025☆134 · Updated last month
- ☆56 · Updated 9 months ago
- GRAPE: Guided-Reinforced Vision-Language-Action Preference Optimization☆141 · Updated 5 months ago
- Official Repository for SAM2Act☆162 · Updated last month
- Being-H0: Vision-Language-Action Pretraining from Large-Scale Human Videos☆157 · Updated last month
- ☆35 · Updated last month
- Official implementation of the paper: Task Reconstruction and Extrapolation for $\pi_0$ using Text Latent (https://arxiv.org/pdf/2505.035…)☆79 · Updated 2 months ago
- Official implementation of "OneTwoVLA: A Unified Vision-Language-Action Model with Adaptive Reasoning"☆185 · Updated 4 months ago
- Code for FLIP: Flow-Centric Generative Planning for General-Purpose Manipulation Tasks☆73 · Updated 9 months ago
- Official PyTorch Implementation of Unified Video Action Model (RSS 2025)☆274 · Updated 2 months ago
- Official Repository for MolmoAct☆199 · Updated 3 weeks ago
- [RSS 2024] Code for "Multimodal Diffusion Transformer: Learning Versatile Behavior from Multimodal Goals" for CALVIN experiments with pre…☆153 · Updated 11 months ago
- [ICRA 2025] In-Context Imitation Learning via Next-Token Prediction☆93 · Updated 6 months ago
- ☆80 · Updated last year
- ☆56 · Updated 8 months ago
- Official code for "Embodied-R1: Reinforced Embodied Reasoning for General Robotic Manipulation"☆83 · Updated last month
- Unified World Models: Coupling Video and Action Diffusion for Pretraining on Large Robotic Datasets☆132 · Updated last week
- Official implementation of the paper "InSpire: Vision-Language-Action Models with Intrinsic Spatial Reasoning"☆45 · Updated this week
- Efficiently apply modification functions to RLDS/TFDS datasets.☆33 · Updated last year
- Reimplementation of GR-1, a generalized policy for robot manipulation.☆143 · Updated last year
- [ICCV2025 Oral] Latent Motion Token as the Bridging Language for Learning Robot Manipulation from Videos☆134 · Updated this week
- [CoRL2024] Official repo of `A3VLM: Actionable Articulation-Aware Vision Language Model`