RobotiXX / multimodal-fusion-network
This repository contains all the code for parsing, transforming, and training a multimodal deep learning network for social robot navigation.
☆15 · Updated 2 years ago
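The repository's actual architecture is not shown on this page; as an illustration of what a multimodal fusion network for social navigation typically looks like, here is a minimal PyTorch sketch that fuses a camera image and a 1-D lidar scan into a velocity command. All class and parameter names (`MultimodalFusionNet`, `lidar_dim`, etc.) are hypothetical, not taken from the repo.

```python
import torch
import torch.nn as nn

class MultimodalFusionNet(nn.Module):
    """Illustrative late-fusion network (not the repo's actual model):
    separate encoders for an RGB image and a 1-D lidar scan, whose
    features are concatenated and decoded to a 2-D velocity command."""

    def __init__(self, lidar_dim=360, img_channels=3):
        super().__init__()
        # Small CNN encoder for the image modality -> (B, 32)
        self.img_encoder = nn.Sequential(
            nn.Conv2d(img_channels, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # MLP encoder for the lidar modality -> (B, 32)
        self.lidar_encoder = nn.Sequential(
            nn.Linear(lidar_dim, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
        )
        # Fusion head: concatenated features -> (linear, angular) velocity
        self.head = nn.Sequential(
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 2),
        )

    def forward(self, image, scan):
        fused = torch.cat([self.img_encoder(image),
                           self.lidar_encoder(scan)], dim=1)
        return self.head(fused)

net = MultimodalFusionNet()
cmd = net(torch.randn(4, 3, 64, 64), torch.randn(4, 360))
print(cmd.shape)  # torch.Size([4, 2])
```

Late fusion (concatenating per-modality embeddings) is only one option; many of the repositories listed below instead fuse earlier in the network or use attention-based mixing.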
Alternatives and similar repositories for multimodal-fusion-network
Users interested in multimodal-fusion-network are comparing it to the libraries listed below.
- ☆11 · Updated 2 years ago
- Scalable Autonomous Control for Social Navigation ☆26 · Updated last year
- PyTorch implementation of "Learning Where to See for Navigation: A Self-Supervised Vision-Action Pre-Training Approach" ☆21 · Updated last year
- Last-Mile Embodied Visual Navigation https://jbwasse2.github.io/portfolio/SLING/ ☆28 · Updated 3 years ago
- [IROS 2024] HabiCrowd: A High Performance Simulator for Crowd-Aware Visual Navigation ☆35 · Updated 2 years ago
- ☆51 · Updated 11 months ago
- (ICRA 2024) SCALE: Self-Correcting Visual Navigation for Mobile Robots via Anti-Novelty Estimation ☆11 · Updated last year
- [CoRL 2023] SOGMP++/SOGMP: Stochastic Occupancy Grid Map Prediction in Dynamic Scenes ☆35 · Updated 5 months ago
- [RA-L'22] Proactive Anomaly Detection for Robot Navigation with Multi-Sensor Fusion ☆18 · Updated 3 years ago
- ☆30 · Updated 3 years ago
- ☆21 · Updated 3 months ago
- ☆28 · Updated 3 years ago
- [TRO-2025] SCOPE: Stochastic Cartographic Occupancy Prediction Engine for Uncertainty-Aware Dynamic Navigation ☆28 · Updated 5 months ago
- [CVPR 2025] Prior Does Matter: Visual Navigation via Denoising Diffusion Bridge Models ☆67 · Updated 3 months ago
- Official code release for "Navigation with Large Language Models: Semantic Guesswork as a Heuristic for Planning" ☆57 · Updated 2 years ago
- ☆31 · Updated 7 months ago
- ☆17 · Updated 2 years ago
- Track 2: Social Navigation ☆24 · Updated 4 months ago
- MuSoHu: Multi-Modal Social Human Navigation Dataset ☆27 · Updated last year
- ☆19 · Updated last year
- ☆17 · Updated 2 years ago
- [RA-L/ICRA 2025] Official implementation for the paper "Diverse Controllable Diffusion Policy with Signal Temporal Logic" ☆33 · Updated last year
- [RA-L] DRAGON: A Dialogue-Based Robot for Assistive Navigation with Visual Language Grounding ☆18 · Updated last year
- Repository for W-RIZZ (RA-L 2024) ☆17 · Updated last year
- Code and data for the paper "Boosting Efficient Reinforcement Learning for Vision-and-Language Navigation With Open-Sourced LLM" ☆16 · Updated 11 months ago
- Accompanying code for the paper "Conditional Unscented Autoencoders for Trajectory Prediction" ☆15 · Updated last year
- [RA-L 2025] Official implementation of the paper "Following Is All You Need: Robot Crowd Navigation Using People As Planners" ☆17 · Updated 4 months ago
- The open-sourced code for Learning-to-navigate-by-forgetting ☆21 · Updated last year
- Code for LGX (Language Guided Exploration). We use LLMs to perform embodied robot navigation in a zero-shot manner. ☆67 · Updated 2 years ago
- ☆19 · Updated 3 years ago