Official code of "RoboOmni: Proactive Robot Manipulation in Omni-modal Context"
☆90 · Nov 17, 2025 · Updated 4 months ago
Alternatives and similar repositories for RoboOmni
Users that are interested in RoboOmni are comparing it to the repositories listed below.
- Implementing ReKep on a real Kinova Gen3 robot ☆11 · Mar 18, 2025 · Updated last year
- [CVPR 2025] OmniMMI: A Comprehensive Multi-modal Interaction Benchmark in Streaming Video Contexts ☆17 · Apr 2, 2025 · Updated 11 months ago
- An open-source personal academic homepage template characterized by its user-friendly design and extensive scalability. ☆36 · Oct 6, 2025 · Updated 5 months ago
- [NeurIPS 2024] Official code for (IMA) Implicit Multimodal Alignment: On the Generalization of Frozen LLMs to Multimodal Inputs ☆23 · Oct 15, 2024 · Updated last year
- Manipulate-Anything: Automating Real-World Robots using Vision-Language Models [CoRL 2024] ☆54 · Apr 3, 2025 · Updated 11 months ago
- ☆35 · Jun 3, 2025 · Updated 9 months ago
- Harbin Institute of Technology, Spring 2023 Compiler Systems course: labs, exercises, slides, and final-exam review materials ☆11 · Jul 30, 2023 · Updated 2 years ago
- ☆11 · Mar 11, 2025 · Updated last year
- [NAACL 2024] Z-GMOT: Zero-shot Generic Multiple Object Tracking ☆13 · May 3, 2024 · Updated last year
- Multi-agent Crafter for cooperative tasks ☆13 · Aug 2, 2025 · Updated 7 months ago
- The open-source materials for the paper "Sparsing Law: Towards Large Language Models with Greater Activation Sparsity". ☆30 · Nov 12, 2024 · Updated last year
- Beyond Softmax Loss: Intra-Concentration and Inter-Separability Loss for Classification (I2CS) ☆12 · Aug 11, 2020 · Updated 5 years ago
- [ICLR 2025] Official PyTorch Implementation of "Mix-LN: Unleashing the Power of Deeper Layers by Combining Pre-LN and Post-LN" by Pengxia… ☆29 · Jul 24, 2025 · Updated 7 months ago
- ☆25 · Aug 29, 2025 · Updated 6 months ago
- Official repository of LIBERO-plus, a generalized benchmark for in-depth robustness analysis of vision-language-action models. ☆251 · Jan 21, 2026 · Updated 2 months ago
- VoxAct-B: Voxel-Based Acting and Stabilizing Policy for Bimanual Manipulation (CoRL 2024) ☆52 · Oct 25, 2024 · Updated last year
- [ICCV 2025] Official PyTorch Code for "Describe, Adapt and Combine: Empowering CLIP Encoders for Open-set 3D Object Retrieval" ☆17 · Aug 23, 2025 · Updated 7 months ago
- ☆76 · Jan 20, 2026 · Updated 2 months ago
- ☆69 · Dec 7, 2025 · Updated 3 months ago
- The official implementation of "Grounded Chain-of-Thought for Multimodal Large Language Models" ☆21 · Jul 21, 2025 · Updated 8 months ago
- Vision-Language-Action Optimization with Trajectory Ensemble Voting ☆25 · Feb 18, 2026 · Updated last month
- ☆11 · Oct 12, 2021 · Updated 4 years ago
- ☆154 · Feb 25, 2026 · Updated 3 weeks ago
- ☆40 · Jul 15, 2025 · Updated 8 months ago
- Slides from group meetings ☆12 · Nov 29, 2020 · Updated 5 years ago
- Evaluate Multimodal LLMs as Embodied Agents ☆56 · Feb 14, 2025 · Updated last year
- Free-to-use editor for creating an online resume ☆18 · Nov 10, 2023 · Updated 2 years ago
- InternVLA-M1: A Spatially Guided Vision-Language-Action Framework for Generalist Robot Policy ☆387 · Feb 11, 2026 · Updated last month
- ReKep experiment on a UR5 setup based on a Kinova arm ☆13 · Apr 25, 2025 · Updated 10 months ago
- ☆21 · Dec 23, 2025 · Updated 3 months ago
- Use contrastive learning to train a large language model (LLM) as a retriever ☆12 · Jul 19, 2024 · Updated last year
- A fully open-source implementation of a GPT-4o-like speech-to-speech video understanding model. ☆37 · Apr 7, 2025 · Updated 11 months ago
- ☆10 · Dec 16, 2023 · Updated 2 years ago
- ☆41 · Sep 16, 2025 · Updated 6 months ago
- OmniZip: Audio-Guided Dynamic Token Compression for Fast Omnimodal Large Language Models ☆64 · Feb 1, 2026 · Updated last month
- ☆18 · May 5, 2024 · Updated last year
- Harbin Institute of Technology, Fall 2022 Computer Organization course: projects, final-exam review materials, and slides ☆10 · Mar 14, 2024 · Updated 2 years ago
- ☆31 · Oct 28, 2025 · Updated 4 months ago