RobotiXX / multimodal-fusion-network
This repository contains the code for parsing, transforming, and training a multimodal deep-learning network for social robot navigation.
☆15 · Updated 2 years ago
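To illustrate what "multimodal fusion" typically means in this setting, here is a minimal NumPy sketch of late fusion: each sensor modality (e.g. camera and lidar features) is projected into an embedding, the embeddings are concatenated, and a final layer maps the fused representation to a navigation command. This is a hypothetical illustration, not the repository's actual architecture; all names, dimensions, and weights below are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse(img_feat, lidar_feat, W_img, W_lidar, W_out):
    """Hypothetical late-fusion step: project each modality, concatenate,
    then map the fused embedding to an action/command vector."""
    h_img = np.tanh(img_feat @ W_img)        # image-branch embedding
    h_lidar = np.tanh(lidar_feat @ W_lidar)  # lidar-branch embedding
    h = np.concatenate([h_img, h_lidar], axis=-1)  # fused representation
    return h @ W_out  # e.g. (linear velocity, angular velocity)

# Toy dimensions: 128-d image features, 64-d lidar features,
# 32-d per-modality embeddings, 2 output commands.
W_img = rng.standard_normal((128, 32)) * 0.1
W_lidar = rng.standard_normal((64, 32)) * 0.1
W_out = rng.standard_normal((64, 2)) * 0.1

cmd = fuse(rng.standard_normal(128), rng.standard_normal(64),
           W_img, W_lidar, W_out)
print(cmd.shape)  # (2,)
```

In practice the projections would be learned (e.g. in PyTorch), but the structure — per-modality encoders feeding a shared fusion head — is the common pattern such repositories implement.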
Alternatives and similar repositories for multimodal-fusion-network
Users interested in multimodal-fusion-network are comparing it to the repositories listed below.
- ☆10 · Updated 2 years ago
- Scalable Autonomous Control for Social Navigation ☆23 · Updated last year
- (ICRA 2024) SCALE: Self-Correcting Visual Navigation for Mobile Robots via Anti-Novelty Estimation ☆11 · Updated 9 months ago
- ☆28 · Updated 2 years ago
- ☆44 · Updated 8 months ago
- PyTorch implementation of "Learning Where to See for Navigation: A Self-Supervised Vision-Action Pre-Training Approach" ☆19 · Updated last year
- [RA-L'22] Proactive Anomaly Detection for Robot Navigation with Multi-Sensor Fusion ☆18 · Updated 2 years ago
- [IROS 2024] HabiCrowd: A High-Performance Simulator for Crowd-Aware Visual Navigation ☆35 · Updated 2 years ago
- ☆14 · Updated last year
- Last-Mile Embodied Visual Navigation (https://jbwasse2.github.io/portfolio/SLING/) ☆28 · Updated 2 years ago
- [CoRL 2023] SOGMP++/SOGMP: Stochastic Occupancy Grid Map Prediction in Dynamic Scenes ☆32 · Updated 2 months ago
- ☆28 · Updated 3 years ago
- Track 2: Social Navigation ☆20 · Updated last month
- [ICRA 2023] Occlusion-Aware Crowd Navigation Using People as Sensors ☆53 · Updated last year
- Official code release for "Navigation with Large Language Models: Semantic Guesswork as a Heuristic for Planning"