THU-RCSCT / vlsa-aegis
VLSA: Vision-Language-Action Models with Plug-and-Play Safety Constraint Layer
☆26Updated this week
Alternatives and similar repositories for vlsa-aegis
Users interested in vlsa-aegis are comparing it to the libraries listed below.
- An example RLDS dataset builder for X-embodiment dataset conversion. ☆54 · Updated 9 months ago
- [NeurIPS 2025] VIKI‑R: Coordinating Embodied Multi-Agent Cooperation via Reinforcement Learning ☆64 · Updated last week
- [NeurIPS 2025 Spotlight] Towards Safety Alignment of Vision-Language-Action Model via Constrained Learning. ☆94 · Updated last week
- Official repo of VLABench, a large-scale benchmark designed for fairly evaluating VLAs, Embodied Agents, and VLMs. ☆350 · Updated last month
- The official repository of LIBERO-PRO, an evaluation extension of the original LIBERO benchmark ☆128 · Updated last week
- [NeurIPS 2024] Incremental Learning of Retrievable Skills For Efficient Continual Task Adaptation ☆20 · Updated 2 months ago
- https://arxiv.org/pdf/2506.06677 ☆41 · Updated last month
- Official repo of "Exploring the Adversarial Vulnerabilities of Vision-Language-Action Models in Robotics" ☆55 · Updated 4 months ago
- [AAAI 2026] Official code for MoLe-VLA: Dynamic Layer-skipping Vision Language Action Model via Mixture-of-Layers for Efficient Robot Man… ☆59 · Updated 4 months ago
- MM-ACT: Learn from Multimodal Parallel Generation to Act ☆83 · Updated last week
- Official implementation of FLARE (AAAI 2025 Oral) ☆28 · Updated 3 weeks ago
- Official code of the paper "DeeR-VLA: Dynamic Inference of Multimodal Large Language Models for Efficient Robot Execution" ☆120 · Updated 10 months ago
- [NeurIPS 2025] VLA-Cache: Towards Efficient Vision-Language-Action Model via Adaptive Token Caching in Robotic Manipulation ☆54 · Updated 3 months ago
- GRAPE: Guided-Reinforced Vision-Language-Action Preference Optimization ☆153 · Updated 8 months ago
- [ICML 2025 Oral] Official repo of EmbodiedBench, a comprehensive benchmark designed to evaluate MLLMs as embodied agents. ☆243 · Updated 2 months ago
- ICCV 2025 ☆145 · Updated 2 weeks ago
- StarVLA: A Lego-like Codebase for Vision-Language-Action Model Developing ☆595 · Updated last week
- Embodied Chain of Thought: A robotic policy that reasons to solve the task. ☆344 · Updated 8 months ago
- OpenHelix: An Open-source Dual-System VLA Model for Robotic Manipulation ☆331 · Updated 3 months ago
- The official codebase for ManipLLM: Embodied Multimodal Large Language Model for Object-Centric Robotic Manipulation (CVPR 2024) ☆144 · Updated last year
- A Survey on Reinforcement Learning of Vision-Language-Action Models for Robotic Manipulation ☆359 · Updated this week
- EVOLVE-VLA: Test-Time Training from Environment Feedback for Vision-Language-Action Models ☆33 · Updated last week
- Evaluating and reproducing real-world robot manipulation policies (e.g., RT-1, RT-1-X, Octo, and OpenVLA) in simulation under common setu… ☆251 · Updated 6 months ago
- Official repo for AGNOSTOS, a cross-task manipulation benchmark, and X-ICM, a cross-task in-context manipulation (VLA) method ☆52 · Updated last month
- Dynamic Mixture of Progressive Parameter-Efficient Expert Library for Lifelong Robot Learning ☆26 · Updated 5 months ago
- Code of "MemoryVLA: Perceptual-Cognitive Memory in Vision-Language-Action Models for Robotic Manipulation" ☆108 · Updated 3 weeks ago
- The repo of the paper "RoboMamba: Multimodal State Space Model for Efficient Robot Reasoning and Manipulation" ☆147 · Updated last year
- ☆405 · Updated this week
- An official implementation of Touch100k: A Large-Scale Touch-Language-Vision Dataset for Touch-Centric Multimodal Representation ☆30 · Updated last year
- A comprehensive list of papers about dual-system VLA models, including papers, code, and related websites ☆90 · Updated last month