NVIDIA-AI-IOT / remembr
☆231 · Updated 2 months ago
Alternatives and similar repositories for remembr
Users who are interested in remembr are comparing it to the libraries listed below.
- ☆246 · Updated 9 months ago
- 🔥 SpatialVLA: a spatially enhanced vision-language-action model trained on 1.1 million real robot episodes. Accepted at RSS 2025. ☆326 · Updated last week
- Code associated with the paper "VLFM: Vision-Language Frontier Maps for Zero-Shot Semantic Navigation" (ICRA 2024). ☆471 · Updated 4 months ago
- Low-level locomotion policy training in Isaac Lab. ☆216 · Updated 2 months ago
- Vision-Language Navigation Benchmark in Isaac Lab. ☆179 · Updated this week
- UMI on Legs: Making Manipulation Policies Mobile with Manipulation-Centric Whole-body Controllers. ☆345 · Updated 9 months ago
- [RSS 2025] Learning to Act Anywhere with Task-centric Latent Actions. ☆351 · Updated this week
- Helpful DoggyBot: Open-World Object Fetching using Legged Robots and Vision-Language Models. ☆106 · Updated 8 months ago
- ReKep: Spatio-Temporal Reasoning of Relational Keypoint Constraints for Robotic Manipulation. ☆780 · Updated 3 months ago
- ViPlanner: Visual Semantic Imperative Learning for Local Navigation. ☆484 · Updated 3 months ago
- End-to-End Navigation with VLMs. ☆81 · Updated last month
- ☆157 · Updated last month
- ☆195 · Updated last week
- PyTorch implementation of YAY Robot. ☆140 · Updated last year
- ☆233 · Updated 4 months ago
- Paper list from the survey "Toward General-Purpose Robots via Foundation Models: A Survey and Meta-Analysis". ☆432 · Updated 4 months ago
- ☆156 · Updated this week
- The unitree_il_lerobot open-source project is a modification of the LeRobot open-source training framework, enabling the training and tes… ☆313 · Updated last week
- Evaluating and reproducing real-world robot manipulation policies (e.g., RT-1, RT-1-X, Octo) in simulation under common setups (e.g., Goo… ☆640 · Updated 2 months ago
- Fine-Tuning Vision-Language-Action Models: Optimizing Speed and Success. ☆415 · Updated last month
- Unitree Go2 simulation platform for testing navigation, decision-making, and autonomous tasks (NVIDIA Isaac/ROS 2). ☆257 · Updated 2 months ago
- Use an LLM to understand user input and control the robot under ROS. ☆31 · Updated last year
- [RSS 2024] Code repository for "DexCap: Scalable and Portable Mocap Data Collection System for Dexterous Manipulation". ☆298 · Updated 7 months ago
- Full autonomy stack for the Unitree Go2. ☆186 · Updated 2 months ago
- Train a loco-manipulation dog with RL. ☆238 · Updated 9 months ago
- Distributed Robot Interaction Dataset. ☆194 · Updated 2 months ago
- ☆354 · Updated 4 months ago
- VoxPoser: Composable 3D Value Maps for Robotic Manipulation with Language Models. ☆697 · Updated 3 months ago
- Embodied Chain of Thought: a robotic policy that reasons to solve the task. ☆254 · Updated 2 months ago
- ☆129 · Updated 2 months ago