jzhzhang / Uni-NaVid

[RSS 2025] Uni-NaVid: A Video-based Vision-Language-Action Model for Unifying Embodied Navigation Tasks.

Alternatives and similar repositories for Uni-NaVid

Users interested in Uni-NaVid are comparing it to the libraries listed below.
