hume-vla / hume
🦾 A Dual-System VLA with System-2 Thinking
☆122 · Updated 3 months ago
Alternatives and similar repositories for hume
Users interested in hume are comparing it to the libraries listed below.
- InstructVLA: Vision-Language-Action Instruction Tuning from Understanding to Manipulation · ☆82 · Updated 2 months ago
- ICCV2025 · ☆143 · Updated 3 weeks ago
- GRAPE: Guided-Reinforced Vision-Language-Action Preference Optimization · ☆152 · Updated 8 months ago
- Interactive Post-Training for Vision-Language-Action Models · ☆154 · Updated 6 months ago
- Official implementation of "OneTwoVLA: A Unified Vision-Language-Action Model with Adaptive Reasoning" · ☆203 · Updated 6 months ago
- Official code for "Embodied-R1: Reinforced Embodied Reasoning for General Robotic Manipulation" · ☆108 · Updated 3 months ago
- Emma-X: An Embodied Multimodal Action Model with Grounded Chain of Thought and Look-ahead Spatial Reasoning · ☆78 · Updated 6 months ago
- F1: A Vision Language Action Model Bridging Understanding and Generation to Actions · ☆138 · Updated last month
- Official implementation of Chain-of-Action: Trajectory Autoregressive Modeling for Robotic Manipulation. Accepted at NeurIPS 2025. · ☆84 · Updated last month
- Code for FLIP: Flow-Centric Generative Planning for General-Purpose Manipulation Tasks · ☆80 · Updated last year
- [ICML 2025] OTTER: A Vision-Language-Action Model with Text-Aware Visual Feature Extraction · ☆111 · Updated 7 months ago
- Official Code For VLA-OS. · ☆128 · Updated 5 months ago
- Official implementation of the paper: Task Reconstruction and Extrapolation for $\pi_0$ using Text Latent (https://arxiv.org/pdf/2505.035…) · ☆92 · Updated 4 months ago
- VLA-RFT: Vision-Language-Action Models with Reinforcement Fine-Tuning · ☆96 · Updated 2 months ago
- The official repo for the paper "VQ-VLA: Improving Vision-Language-Action Models via Scaling Vector-Quantized Action Tokenizers" (ICCV 2025) · ☆101 · Updated 3 weeks ago
- Being-H0: Vision-Language-Action Pretraining from Large-Scale Human Videos · ☆186 · Updated 3 months ago
- [NeurIPS 2025] DreamVLA: A Vision-Language-Action Model Dreamed with Comprehensive World Knowledge · ☆245 · Updated 2 months ago
- A comprehensive list of papers about dual-system VLA models, including papers, codes, and related websites. · ☆86 · Updated 3 weeks ago
- InternVLA-M1: A Spatially Guided Vision-Language-Action Framework for Generalist Robot Policy · ☆305 · Updated last month
- The repo of the paper `RoboMamba: Multimodal State Space Model for Efficient Robot Reasoning and Manipulation` · ☆143 · Updated 11 months ago
- LLaVA-VLA: A Simple Yet Powerful Vision-Language-Action Model [Actively Maintained 🔥] · ☆173 · Updated last month
- HybridVLA: Collaborative Diffusion and Autoregression in a Unified Vision-Language-Action Model · ☆324 · Updated 2 months ago
- Official code of the paper "DeeR-VLA: Dynamic Inference of Multimodal Large Language Models for Efficient Robot Execution" · ☆120 · Updated 9 months ago
- ☆60 · Updated 11 months ago
- ☆98 · Updated last month
- ☆61 · Updated 11 months ago
- [ICCV2025 Oral] Latent Motion Token as the Bridging Language for Learning Robot Manipulation from Videos · ☆151 · Updated 2 months ago
- RynnVLA-001: Using Human Demonstrations to Improve Robot Manipulation · ☆268 · Updated last week
- Code repository for ControlVLA (CoRL 2025). · ☆77 · Updated last month
- ☆61 · Updated 9 months ago