ZGC-EmbodyAI / LangForce
☆34 Updated last week
Alternatives and similar repositories for LangForce
Users interested in LangForce are comparing it to the libraries listed below.
- EVOLVE-VLA: Test-Time Training from Environment Feedback for Vision-Language-Action Models ☆69 Updated last month
- [NeurIPS 2025] VIKI-R: Coordinating Embodied Multi-Agent Cooperation via Reinforcement Learning ☆70 Updated last month
- GRAPE: Guided-Reinforced Vision-Language-Action Preference Optimization ☆157 Updated 10 months ago
- MM-ACT: Learn from Multimodal Parallel Generation to Act ☆93 Updated 2 weeks ago
- Official repo of VLABench, a large-scale benchmark designed for fairly evaluating VLAs, Embodied Agents, and VLMs ☆381 Updated 3 months ago
- Official code for "Embodied-R1: Reinforced Embodied Reasoning for General Robotic Manipulation" ☆122 Updated 5 months ago
- Implementation of VLM4VLA ☆115 Updated last week
- [ICCV 2025] RoboFactory: Exploring Embodied Agent Collaboration with Compositional Constraints ☆106 Updated 5 months ago
- Official implementation of WMPO: World Model-based Policy Optimization for Vision-Language-Action Models ☆146 Updated last month
- InternVLA-M1: A Spatially Guided Vision-Language-Action Framework for Generalist Robot Policy ☆363 Updated last month
- A collection of vision-language-action model post-training methods ☆125 Updated last week
- VLA-Arena is an open-source benchmark for systematic evaluation of Vision-Language-Action (VLA) models ☆109 Updated 3 weeks ago
- Embodied-Reasoner: Synergizing Visual Search, Reasoning, and Action for Embodied Interactive Tasks ☆186 Updated 4 months ago
- [ICLR 2026] InstructVLA: Vision-Language-Action Instruction Tuning from Understanding to Manipulation ☆96 Updated 2 weeks ago
- [ICML 2025 Oral] Official repo of EmbodiedBench, a comprehensive benchmark designed to evaluate MLLMs as embodied agents ☆262 Updated 3 months ago
- HybridVLA: Collaborative Diffusion and Autoregression in a Unified Vision-Language-Action Model ☆336 Updated 4 months ago
- [NeurIPS 2025] Official implementation of Chain-of-Action: Trajectory Autoregressive Modeling for Robotic Manipulation ☆98 Updated 2 months ago
- [NeurIPS 2025] DreamVLA: A Vision-Language-Action Model Dreamed with Comprehensive World Knowledge ☆286 Updated last month
- [ICLR 2026] Code of "MemoryVLA: Perceptual-Cognitive Memory in Vision-Language-Action Models for Robotic Manipulation" ☆145 Updated last week
- ICCV 2025 ☆153 Updated 2 months ago
- [AAAI 2026 Oral] CronusVLA: Towards Efficient and Robust Manipulation via Multi-Frame Vision-Language-Action Modeling ☆87 Updated last month
- 🦾 A Dual-System VLA with System2 Thinking ☆132 Updated 5 months ago
- [NeurIPS 2025 Spotlight] SoFar: Language-Grounded Orientation Bridges Spatial Reasoning and Object Manipulation ☆225 Updated 7 months ago
- ☆56 Updated 6 months ago
- Official repository of LIBERO-plus, a generalized benchmark for in-depth robustness analysis of vision-language-action models ☆212 Updated 3 weeks ago
- Evo-1: Lightweight Vision-Language-Action Model with Preserved Semantic Alignment ☆213 Updated last month
- [NeurIPS 2025] Official implementation of "RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics" ☆228 Updated last month
- Single-file implementation to advance vision-language-action (VLA) models with reinforcement learning ☆399 Updated 3 months ago
- RoboChallenge inference example code ☆106 Updated 3 weeks ago
- LLaVA-VLA: A Simple Yet Powerful Vision-Language-Action Model [Actively Maintained🔥] ☆175 Updated 3 months ago