TRI-ML / prismatic-vlms
A flexible and efficient codebase for training visually-conditioned language models (VLMs)
⭐909 · Updated last year
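For orientation, the repo exposes a small inference API around a top-level `prismatic.load` entry point. The following is a minimal sketch of the usage pattern shown in the repo's README; the model ID `prism-dinosiglip+7b`, the image path, and the exact `generate()` keyword arguments are assumptions drawn from that README and may have changed upstream:

```python
import torch
from PIL import Image

from prismatic import load  # prismatic-vlms' top-level model loader

# Assumption: "prism-dinosiglip+7b" is one of the released model IDs;
# gated LLM backbones may additionally require a HF access token.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
vlm = load("prism-dinosiglip+7b")
vlm.to(device, dtype=torch.bfloat16)

# Build a chat-formatted prompt for the backbone LLM.
image = Image.open("example.jpg").convert("RGB")  # hypothetical local image
prompt_builder = vlm.get_prompt_builder()
prompt_builder.add_turn(role="human", message="What is happening in this image?")
prompt_text = prompt_builder.get_prompt()

# Generate a visually-conditioned response.
generated_text = vlm.generate(
    image,
    prompt_text,
    do_sample=True,
    temperature=0.4,
    max_new_tokens=512,
)
print(generated_text)
```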
Alternatives and similar repositories for prismatic-vlms
Users interested in prismatic-vlms are comparing it to the libraries listed below.
- Official Repo for Fine-Tuning Large Vision-Language Models as Decision-Making Agents via Reinforcement Learning ⭐405 · Updated last year
- Compose multimodal datasets ⭐542 · Updated 3 weeks ago
- Heterogeneous Pre-trained Transformer (HPT) as Scalable Policy Learner. ⭐523 · Updated last year
- Implementation of "PaLM-E: An Embodied Multimodal Language Model" ⭐333 · Updated last year
- Fine-Tuning Vision-Language-Action Models: Optimizing Speed and Success ⭐991 · Updated 4 months ago
- Cosmos-Reason1 models understand the physical common sense and generate appropriate embodied decisions in natural language through long c… ⭐886 · Updated 3 weeks ago
- Evaluating and reproducing real-world robot manipulation policies (e.g., RT-1, RT-1-X, Octo) in simulation under common setups (e.g., Goo… ⭐952 · Updated last month
- Embodied Reasoning Question Answer (ERQA) Benchmark ⭐255 · Updated 10 months ago
- Embodied Chain of Thought: A robotic policy that reasons to solve the task. ⭐361 · Updated 9 months ago
- Code for the Molmo Vision-Language Model ⭐863 · Updated last year
- Implementation of π₀, the robotic foundation model architecture proposed by Physical Intelligence ⭐559 · Updated this week
- Official repo and evaluation implementation of VSI-Bench ⭐661 · Updated 5 months ago
- OpenEQA: Embodied Question Answering in the Era of Foundation Models ⭐339 · Updated last year
- [ICLR 2025] LAPA: Latent Action Pretraining from Videos ⭐445 · Updated last year
- ⭐422 · Updated last month
- SimpleVLA-RL: Scaling VLA Training via Reinforcement Learning ⭐1,302 · Updated 3 weeks ago
- Embodied Agent Interface (EAI): Benchmarking LLMs for Embodied Decision Making (NeurIPS D&B 2024 Oral) ⭐278 · Updated 10 months ago
- CALVIN - A benchmark for Language-Conditioned Policy Learning for Long-Horizon Robot Manipulation Tasks ⭐809 · Updated 4 months ago
- Recent LLM-based CV and related works. Comments and contributions welcome! ⭐874 · Updated 10 months ago
- Benchmarking Knowledge Transfer in Lifelong Robot Learning ⭐1,410 · Updated 10 months ago
- PyTorch implementation of the RT-1-X and RT-2-X models from the paper "Open X-Embodiment: Robotic Learning Datasets and RT-X Models" ⭐234 · Updated this week
- Instruct2Act: Mapping Multi-modality Instructions to Robotic Actions with Large Language Model ⭐371 · Updated last year
- Video-R1: Reinforcing Video Reasoning in MLLMs [🔥 the first paper to explore R1 for video] ⭐806 · Updated last month
- Democratization of RT-2: "RT-2: New model translates vision and language into action" ⭐545 · Updated last year
- VLM Evaluation: Benchmark for VLMs, spanning text generation tasks from VQA to Captioning ⭐135 · Updated last year
- OpenVLA: An open-source vision-language-action model for robotic manipulation. ⭐329 · Updated 10 months ago
- World modeling challenge for humanoid robots ⭐545 · Updated last year
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… ⭐939 · Updated 5 months ago
- Code for RoboFlamingo ⭐421 · Updated last year
- A comprehensive collection and survey of vision-language model papers and model GitHub repositories. Continuously updated. ⭐506 · Updated 2 weeks ago