LINs-lab / cluster_tutorial
☆16 · Updated 5 months ago
Alternatives and similar repositories for cluster_tutorial
Users interested in cluster_tutorial are comparing it to the repositories listed below.
- StarVLA: A Lego-like Codebase for Vision-Language-Action Model Development ☆595 · Updated this week
- A brief repo about paper research ☆15 · Updated last year
- ☆104 · Updated 2 weeks ago
- Official repo of VLABench, a large-scale benchmark designed for fairly evaluating VLAs, Embodied Agents, and VLMs. ☆344 · Updated last month
- InternVLA-M1: A Spatially Guided Vision-Language-Action Framework for Generalist Robot Policy ☆312 · Updated this week
- OpenHelix: An Open-source Dual-System VLA Model for Robotic Manipulation ☆331 · Updated 3 months ago
- A collection of papers/projects that train flow-matching models/policies via RL. ☆325 · Updated last week
- Team Comet's 2025 BEHAVIOR Challenge Codebase ☆147 · Updated this week
- ☆387 · Updated this week
- SimpleVLA-RL: Scaling VLA Training via Reinforcement Learning ☆1,109 · Updated 2 months ago
- Not a list of papers, but a list of paper reading lists… ☆238 · Updated 7 months ago
- [ICML 2025 Oral] Official repo of EmbodiedBench, a comprehensive benchmark designed to evaluate MLLMs as embodied agents. ☆236 · Updated 2 months ago
- A tiny paper-rating web app ☆38 · Updated 9 months ago
- [PKU EPIC Lab] A beginner-friendly guide to getting started with embodied AI ☆607 · Updated 2 weeks ago
- The Embodied Intelligence Introductory Practice of the OpenMOSS Lab (Fudan & SII) ☆39 · Updated last week
- Single-file implementation to advance vision-language-action (VLA) models with reinforcement learning. ☆365 · Updated last month
- This repository summarizes recent advances in the VLA + RL paradigm and provides a taxonomic classification of relevant works. ☆369 · Updated 2 months ago
- Paper list in the survey: A Survey on Vision-Language-Action Models: An Action Tokenization Perspective ☆368 · Updated 5 months ago
- Dexbotic: Open-Source Vision-Language-Action Toolbox ☆609 · Updated 2 weeks ago
- HybridVLA: Collaborative Diffusion and Autoregression in a Unified Vision-Language-Action Model ☆325 · Updated 2 months ago
- https://arxiv.org/pdf/2506.06677 ☆40 · Updated last month
- Official repository of LIBERO-plus, a generalized benchmark for in-depth robustness analysis of vision-language-action models. ☆134 · Updated last month
- [NeurIPS 2025 Spotlight] SoFar: Language-Grounded Orientation Bridges Spatial Reasoning and Object Manipulation ☆213 · Updated 5 months ago
- [ICLR 2025] LAPA: Latent Action Pretraining from Videos ☆424 · Updated 10 months ago
- ☆213 · Updated 3 months ago
- [ICCV 2025] RoboFactory: Exploring Embodied Agent Collaboration with Compositional Constraints ☆97 · Updated 3 months ago
- Official code of Motus ☆56 · Updated this week
- Official PyTorch Implementation of Unified Video Action Model (RSS 2025) ☆305 · Updated 4 months ago
- VLA-Arena is an open-source benchmark for systematic evaluation of Vision-Language-Action (VLA) models. ☆72 · Updated last week
- A curated list of large VLM-based VLA models for robotic manipulation. ☆285 · Updated last month