TRI-ML / prismatic-vlms
A flexible and efficient codebase for training visually-conditioned language models (VLMs)
☆758 · Updated last year
Alternatives and similar repositories for prismatic-vlms
Users interested in prismatic-vlms are comparing it to the repositories listed below.
- Compose multimodal datasets 🎹 ☆455 · Updated 2 weeks ago
- Cosmos-Reason1 models understand physical common sense and generate appropriate embodied decisions in natural language through long c… ☆581 · Updated last week
- Heterogeneous Pre-trained Transformer (HPT) as Scalable Policy Learner. ☆505 · Updated 8 months ago
- Fine-Tuning Vision-Language-Action Models: Optimizing Speed and Success ☆570 · Updated 3 months ago
- Code for the Molmo Vision-Language Model ☆610 · Updated 7 months ago
- Official Repo for Fine-Tuning Large Vision-Language Models as Decision-Making Agents via Reinforcement Learning ☆376 · Updated 7 months ago
- Implementation of "PaLM-E: An Embodied Multimodal Language Model" ☆316 · Updated last year
- Embodied Chain of Thought: a robotic policy that reasons to solve the task. ☆284 · Updated 4 months ago
- Embodied Reasoning Question Answer (ERQA) Benchmark ☆191 · Updated 4 months ago
- OpenEQA: Embodied Question Answering in the Era of Foundation Models ☆306 · Updated 10 months ago
- [ICLR 2025] LAPA: Latent Action Pretraining from Videos ☆341 · Updated 6 months ago
- Evaluating and reproducing real-world robot manipulation policies (e.g., RT-1, RT-1-X, Octo) in simulation under common setups (e.g., Goo… ☆707 · Updated 4 months ago
- Official repo and evaluation implementation of VSI-Bench ☆560 · Updated this week
- CALVIN - A benchmark for Language-Conditioned Policy Learning for Long-Horizon Robot Manipulation Tasks ☆640 · Updated last month
- Recent LLM-based CV and related works. Welcome to comment/contribute! ☆870 · Updated 5 months ago
- Implementation of π₀, the robotic foundation model architecture proposed by Physical Intelligence ☆476 · Updated last week
- World modeling challenge for humanoid robots ☆501 · Updated 9 months ago
- ☆378 · Updated 6 months ago
- [ICLR'25] LLaRA: Supercharging Robot Learning Data for Vision-Language Policy ☆220 · Updated 4 months ago
- Embodied Agent Interface (EAI): Benchmarking LLMs for Embodied Decision Making (NeurIPS D&B 2024 Oral) ☆221 · Updated 5 months ago
- Latest Advances on Vision-Language-Action Models. ☆88 · Updated 5 months ago
- Instruct2Act: Mapping Multi-modality Instructions to Robotic Actions with Large Language Model ☆366 · Updated last year
- PyTorch Implementation of "V*: Guided Visual Search as a Core Mechanism in Multimodal LLMs" ☆656 · Updated last year
- A curated list of awesome papers on Embodied AI and related research/industry-driven resources. ☆460 · Updated 2 months ago
- [RSS 2025] Learning to Act Anywhere with Task-centric Latent Actions ☆628 · Updated this week
- PyTorch implementation of the models RT-1-X and RT-2-X from the paper: "Open X-Embodiment: Robotic Learning Datasets and RT-X Models" ☆218 · Updated last week
- Online RL with Simple Reward Enables Training VLA Models with Only One Trajectory ☆343 · Updated last month
- Democratization of RT-2 "RT-2: New model translates vision and language into action" ☆490 · Updated last year
- ☆621 · Updated last year
- VLM Evaluation: Benchmark for VLMs, spanning text generation tasks from VQA to Captioning ☆120 · Updated 10 months ago