zubair-irshad / Awesome-Robotics-3D
A curated list of 3D vision papers related to the robotics domain in the era of large models (LLMs/VLMs), inspired by awesome-computer-vision; includes papers, code, and related websites.
☆779 · Updated 4 months ago
Alternatives and similar repositories for Awesome-Robotics-3D
Users interested in Awesome-Robotics-3D are comparing it to the repositories listed below.
- [ICML 2024] 3D-VLA: A 3D Vision-Language-Action Generative World Model ☆602 · Updated last year
- ReKep: Spatio-Temporal Reasoning of Relational Keypoint Constraints for Robotic Manipulation ☆879 · Updated 9 months ago
- Paper list in the survey paper: Toward General-Purpose Robots via Foundation Models: A Survey and Meta-Analysis ☆448 · Updated 3 weeks ago
- Official code release for ConceptGraphs ☆730 · Updated last month
- A curated list of awesome papers on Embodied AI and related research/industry-driven resources. ☆487 · Updated 6 months ago
- BEHAVIOR-1K: a platform for accelerating Embodied AI research. Join our Discord for support: https://discord.gg/bccR5vGFEx ☆1,191 · Updated this week
- [RSS 2024] 3D Diffusion Policy: Generalizable Visuomotor Policy Learning via Simple 3D Representations ☆1,160 · Updated last month
- RoboCasa: Large-Scale Simulation of Everyday Tasks for Generalist Robots ☆1,023 · Updated 3 months ago
- VoxPoser: Composable 3D Value Maps for Robotic Manipulation with Language Models ☆755 · Updated 9 months ago
- RoboVerse: Towards a Unified Platform, Dataset and Benchmark for Scalable and Generalizable Robot Learning ☆1,545 · Updated this week
- Evaluating and reproducing real-world robot manipulation policies (e.g., RT-1, RT-1-X, Octo) in simulation under common setups (e.g., Goo…) ☆874 · Updated 8 months ago
- Re-implementation of the pi0 vision-language-action (VLA) model from Physical Intelligence ☆1,294 · Updated 10 months ago
- A paper list of my reading history: Robotics, Learning, Vision. ☆473 · Updated last month
- A Survey of Embodied Learning for Object-Centric Robotic Manipulation ☆246 · Updated last year
- 🔥 SpatialVLA: a spatial-enhanced vision-language-action model trained on 1.1 million real robot episodes. Accepted at RSS 2025. ☆584 · Updated 5 months ago
- Fine-Tuning Vision-Language-Action Models: Optimizing Speed and Success ☆890 · Updated 3 months ago
- A comprehensive list of papers about robot manipulation, including papers, code, and related websites. ☆727 · Updated 3 weeks ago
- [RSS 2025] Learning to Act Anywhere with Task-centric Latent Actions ☆873 · Updated 3 weeks ago
- Mobile manipulation research tools for roboticists ☆1,160 · Updated last year
- ☆351 · Updated this week
- A curated list of state-of-the-art research in embodied AI, focusing on vision-language-action (VLA) models, vision-language navigation (… ☆2,137 · Updated last week
- [Actively Maintained🔥] A list of Embodied AI papers accepted by top conferences (ICLR, NeurIPS, ICML, RSS, CoRL, ICRA, IROS, CVPR, ICCV,… ☆434 · Updated last week
- Benchmarking Knowledge Transfer in Lifelong Robot Learning ☆1,242 · Updated 8 months ago
- ☆1,315 · Updated last year
- ☆380 · Updated last month
- ☆1,550 · Updated last month
- ☆414 · Updated 2 weeks ago
- [ICML 2024] Official code repository for the 3D embodied generalist agent LEO ☆468 · Updated 7 months ago
- 🎁 A collection of utilities for LeRobot. ☆702 · Updated last week
- A simulation platform for versatile Embodied AI research and development. ☆1,128 · Updated 3 months ago