jjjllxx / NUS-EE5904-ME5404-Neural-Network-Projects
NUS Neural Networks (EE5904/ME5404), Semester 2 AY21/22, including all five projects and solutions
☆37 · Updated last year
Alternatives and similar repositories for NUS-EE5904-ME5404-Neural-Network-Projects
Users interested in NUS-EE5904-ME5404-Neural-Network-Projects are comparing it to the repositories listed below
- ☆300 · Updated 4 months ago
- Lumina Robotics Talent Call | Lumina Community Embodied AI Talent Board | A list for Embodied AI / Robotics Jobs (PhD, RA, intern, full-time, etc.) ☆933 · Updated this week
- An Introduction to Embodied Intelligence (A Quick Guide of Embodied-AI) (Updating) ☆134 · Updated 4 months ago
- A curated list of 3D Vision papers relating to Robotics domain in the era of large models i.e. LLMs/VLMs, inspired by awesome-computer-vi… ☆754 · Updated last month
- Awesome Embodied Navigation: Concept, Paradigm and State-of-the-arts ☆153 · Updated 9 months ago
- [Actively Maintained🔥] A list of Embodied AI papers accepted by top conferences (ICLR, NeurIPS, ICML, RSS, CoRL, ICRA, IROS, CVPR, ICCV,… ☆360 · Updated last month
- It's not a list of papers, but a list of paper reading lists... ☆224 · Updated 4 months ago
- ICRA2024 Paper List ☆569 · Updated 11 months ago
- Paper list in the survey: A Survey on Vision-Language-Action Models: An Action Tokenization Perspective ☆228 · Updated 2 months ago
- RoboScholar: A Comprehensive Paper List of Embodied AI and Robotics Research ☆150 · Updated last week
- A curated list of awesome papers on Embodied AI and related research/industry-driven resources. ☆464 · Updated 3 months ago
- [ICML 2024] 3D-VLA: A 3D Vision-Language-Action Generative World Model ☆559 · Updated 10 months ago
- ☆424 · Updated last year
- ☆261 · Updated this week
- A comprehensive list of papers for the definition of World Models and using World Models for General Video Generation, Embodied AI, and A… ☆387 · Updated this week
- A curated list of large VLM-based VLA models for robotic manipulation. ☆102 · Updated this week
- ☆250 · Updated 3 months ago
- Official code for the CVPR 2025 paper "Navigation World Models". ☆374 · Updated 3 weeks ago
- DreamVLA: A Vision-Language-Action Model Dreamed with Comprehensive World Knowledge ☆167 · Updated last week
- SoFar: Language-Grounded Orientation Bridges Spatial Reasoning and Object Manipulation ☆188 · Updated 2 months ago
- [TMLR 2024] repository for VLN with foundation models ☆160 · Updated last month
- ICRA2025 Paper List ☆263 · Updated 3 months ago
- ☆154 · Updated 3 weeks ago
- The repository provides code associated with the paper VLFM: Vision-Language Frontier Maps for Zero-Shot Semantic Navigation (ICRA 2024) ☆554 · Updated 7 months ago
- Official implementation of "RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics" ☆145 · Updated last month
- [NeurIPS 2024] SG-Nav: Online 3D Scene Graph Prompting for LLM-based Zero-shot Object Navigation ☆237 · Updated 6 months ago
- [ICML 2024] Official code repository for 3D embodied generalist agent LEO ☆457 · Updated 4 months ago
- 🔥 SpatialVLA: a spatial-enhanced vision-language-action model that is trained on 1.1 Million real robot episodes. Accepted at RSS 2025. ☆472 · Updated 2 months ago
- The official repo for "SpatialBot: Precise Spatial Understanding with Vision Language Models." ☆301 · Updated 3 months ago
- Latest Advances on Embodied Multimodal LLMs (or Vision-Language-Action Models). ☆120 · Updated last year