RobotiXX / multimodal-fusion-network
This repository contains the code for parsing, transforming, and training a multimodal deep learning network for social robot navigation.
☆15 · Updated 2 years ago
Alternatives and similar repositories for multimodal-fusion-network
Users interested in multimodal-fusion-network are comparing it to the repositories listed below.
- Scalable Autonomous Control for Social Navigation ☆26 · Updated last year
- ☆11 · Updated 2 years ago
- Last-Mile Embodied Visual Navigation https://jbwasse2.github.io/portfolio/SLING/ ☆28 · Updated 3 years ago
- ☆28 · Updated 3 years ago
- Official code release for "Navigation with Large Language Models: Semantic Guesswork as a Heuristic for Planning" ☆56 · Updated 2 years ago
- PyTorch implementation of "Learning Where to See for Navigation: A Self-Supervised Vision-Action Pre-Training Approach" ☆21 · Updated last year
- [CVPR 2025] Prior Does Matter: Visual Navigation via Denoising Diffusion Bridge Models ☆65 · Updated 3 months ago
- [CoRL 2023] SOGMP++/SOGMP: Stochastic Occupancy Grid Map Prediction in Dynamic Scenes ☆35 · Updated 4 months ago
- ☆51 · Updated 10 months ago
- [RA-L'22] Proactive Anomaly Detection for Robot Navigation with Multi-Sensor Fusion ☆18 · Updated 3 years ago
- ☆16 · Updated 2 years ago
- MuSoHu: Multi-Modal Social Human Navigation Dataset ☆27 · Updated last year
- ☆30 · Updated 3 years ago
- [IROS 2024] HabiCrowd: A High Performance Simulator for Crowd-Aware Visual Navigation ☆35 · Updated 2 years ago
- Repository for W-RIZZ (RA-L 2024) ☆17 · Updated last year
- The open-sourced code for Learning-to-navigate-by-forgetting ☆21 · Updated last year
- ☆22 · Updated 2 months ago
- ☆37 · Updated 3 years ago
- [ICRA 2021] SSCNav: Confidence-Aware Semantic Scene Completion for Visual Semantic Navigation ☆46 · Updated 4 years ago
- [RA-L 2025] Official implementation of the paper "Following Is All You Need: Robot Crowd Navigation Using People As Planners" ☆17 · Updated 3 months ago
- (ICRA 2024) SCALE: Self-Correcting Visual Navigation for Mobile Robots via Anti-Novelty Estimation ☆11 · Updated 11 months ago
- Track 2: Social Navigation ☆24 · Updated 4 months ago
- Vision-based off-road navigation with geographical hints ☆48 · Updated last year
- Code for LGX (Language Guided Exploration). We use LLMs to perform embodied robot navigation in a zero-shot manner. ☆67 · Updated 2 years ago
- Aligning Knowledge Graph with Visual Perception for Object-goal Navigation (ICRA 2024) ☆41 · Updated 9 months ago
- ☆26 · Updated 3 years ago
- [RA-L] DRAGON: A Dialogue-Based Robot for Assistive Navigation with Visual Language Grounding ☆17 · Updated last year
- https://xgxvisnav.github.io/ ☆21 · Updated last year
- BehAV: Behavioral Rule Guided Autonomy Using VLM for Robot Navigation in Outdoor Scenes (ICRA'25) ☆35 · Updated last year
- ☆19 · Updated 3 years ago