ZiliangMiao / Multimodal_Large_Language_Model_Research
☆35 · Updated last year
Alternatives and similar repositories for Multimodal_Large_Language_Model_Research:
Users interested in Multimodal_Large_Language_Model_Research are comparing it to the repositories listed below.
- Code for "Prompt a Robot to Walk with Large Language Models" (https://arxiv.org/abs/2309.09969) ☆91 · Updated last year
- Code for LGX (Language Guided Exploration). We use LLMs to perform embodied robot navigation in a zero-shot manner. ☆55 · Updated last year
- Code for "Unleashing Large-Scale Video Generative Pre-training for Visual Robot Manipulation" ☆43 · Updated 9 months ago
- Codebase for the paper "RoCo: Dialectic Multi-Robot Collaboration with Large Language Models" ☆177 · Updated last year
- Official implementation of "Data Scaling Laws in Imitation Learning for Robotic Manipulation" ☆139 · Updated 3 months ago
- A simulation framework based on ROS 2 and LLMs (such as GPT) for robot interaction tasks in the era of large models ☆114 · Updated 8 months ago
- This repo contains a curated list of robot learning resources (mainly for manipulation). ☆163 · Updated 5 months ago
- Official GitHub repository for the paper "Bridging Zero-shot Object Navigation and Foundation Models through Pixel-Guided Navigation Skill", … ☆82 · Updated 3 months ago
- Enhancing LLM/VLM capabilities for robot task and motion planning with additional algorithm-based tools. ☆58 · Updated 4 months ago
- Code repository for "DynaCon: Dynamic Robot Planner with Contextual Awareness via LLMs". This package is for ROS Noetic. ☆21 · Updated last year
- 2023 Mobile Robot Grasping and Navigation Challenge ☆22 · Updated last year
- A simple testbed for robotics manipulation policies ☆75 · Updated 3 weeks ago
- Official implementation of Matcha-agent (https://arxiv.org/abs/2303.08268) ☆24 · Updated 5 months ago
- [IROS 2024 Oral] ManipVQA: Injecting Robotic Affordance and Physically Grounded Information into Multi-Modal Large Language Models ☆83 · Updated 5 months ago
- Vision-Language Navigation Benchmark in Isaac Lab ☆85 · Updated last month
- Mobile manipulation in Habitat ☆75 · Updated 2 months ago
- Low-level locomotion policy training in Isaac Lab ☆103 · Updated 2 months ago
- A collection of papers, code, and talks on visual imitation learning / imitation learning from video for robotics. ☆56 · Updated 2 years ago
- LLM3: Large Language Model-based Task and Motion Planning with Motion Failure Reasoning ☆76 · Updated 8 months ago
- Awesome lists about robot learning. ☆65 · Updated 2 months ago
- Manipulate-Anything: Automating Real-World Robots Using Vision-Language Models [CoRL 2024] ☆14 · Updated 2 months ago