zubair-irshad / Awesome-Robotics-3D
A curated list of 3D vision papers relating to the robotics domain in the era of large models (i.e., LLMs/VLMs), inspired by awesome-computer-vision; includes papers, code, and related websites
☆752 · Updated last month
Alternatives and similar repositories for Awesome-Robotics-3D
Users interested in Awesome-Robotics-3D are comparing it to the repositories listed below.
- [ICML 2024] 3D-VLA: A 3D Vision-Language-Action Generative World Model ☆556 · Updated 9 months ago
- ReKep: Spatio-Temporal Reasoning of Relational Keypoint Constraints for Robotic Manipulation ☆822 · Updated 6 months ago
- Paper list in the survey paper: Toward General-Purpose Robots via Foundation Models: A Survey and Meta-Analysis ☆442 · Updated 7 months ago
- Evaluating and reproducing real-world robot manipulation policies (e.g., RT-1, RT-1-X, Octo) in simulation under common setups (e.g., Goo… ☆732 · Updated 4 months ago
- A curated list of state-of-the-art research in embodied AI, focusing on vision-language-action (VLA) models, vision-language navigation (… ☆1,382 · Updated last week
- Re-implementation of pi0 vision-language-action (VLA) model from Physical Intelligence ☆1,101 · Updated 6 months ago
- RoboCasa: Large-Scale Simulation of Everyday Tasks for Generalist Robots ☆894 · Updated this week
- RoboVerse: Towards a Unified Platform, Dataset and Benchmark for Scalable and Generalizable Robot Learning ☆1,389 · Updated this week
- [RSS 2024] 3D Diffusion Policy: Generalizable Visuomotor Policy Learning via Simple 3D Representations ☆1,009 · Updated last month
- BEHAVIOR-1K: a platform for accelerating Embodied AI research. Join our Discord for support: https://discord.gg/bccR5vGFEx ☆741 · Updated this week
- VoxPoser: Composable 3D Value Maps for Robotic Manipulation with Language Models ☆723 · Updated 6 months ago
- A paper list of my reading history: robotics, learning, vision ☆431 · Updated last month
- Official code release for ConceptGraphs ☆655 · Updated 7 months ago
- A curated list of awesome papers on Embodied AI and related research/industry-driven resources ☆464 · Updated 2 months ago
- Fine-Tuning Vision-Language-Action Models: Optimizing Speed and Success ☆613 · Updated 4 months ago
- [RSS 2025] Learning to Act Anywhere with Task-centric Latent Actions ☆677 · Updated this week
- 🔥 SpatialVLA: a spatial-enhanced vision-language-action model that is trained on 1.1 million real robot episodes. Accepted at RSS 2025. ☆452 · Updated 2 months ago
- Octo is a transformer-based robot policy trained on a diverse mix of 800k robot trajectories. ☆1,334 · Updated last year