TRI-ML / prismatic-vlms
A flexible and efficient codebase for training visually-conditioned language models (VLMs)
☆675 · Updated 10 months ago
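For context, a minimal inference sketch in the style of the prismatic-vlms README, assuming the `prismatic` package is installed from the repo and a pretrained checkpoint such as `prism-dinosiglip+7b` is available; model IDs and exact signatures may differ across versions, so treat this as an illustrative sketch rather than the definitive API:

```python
import requests
import torch
from PIL import Image

from prismatic import load  # assumes an editable install of the prismatic-vlms repo

# Load a pretrained VLM by model ID (weights are fetched from the HF Hub)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
vlm = load("prism-dinosiglip+7b")  # example ID; see the repo's model zoo
vlm.to(device, dtype=torch.bfloat16)

# Fetch an example image and pose a question
image_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
image = Image.open(requests.get(image_url, stream=True).raw).convert("RGB")

# Build the chat-style prompt expected by the underlying language model
prompt_builder = vlm.get_prompt_builder()
prompt_builder.add_turn(role="human", message="What is going on in this image?")
prompt_text = prompt_builder.get_prompt()

# Generate a visually-conditioned answer
generated_text = vlm.generate(
    image,
    prompt_text,
    do_sample=True,
    temperature=0.4,
    max_new_tokens=512,
)
print(generated_text)
```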
Alternatives and similar repositories for prismatic-vlms
Users interested in prismatic-vlms are comparing it to the libraries listed below.
- Compose multimodal datasets ☆371 · Updated 3 weeks ago
- Official repo for Fine-Tuning Large Vision-Language Models as Decision-Making Agents via Reinforcement Learning ☆350 · Updated 5 months ago
- Embodied Chain of Thought: a robotic policy that reasons to solve the task ☆239 · Updated last month
- Heterogeneous Pre-trained Transformer (HPT) as a scalable policy learner ☆493 · Updated 5 months ago
- Fine-Tuning Vision-Language-Action Models: Optimizing Speed and Success ☆386 · Updated 2 weeks ago
- Cosmos-Reason1 models understand physical common sense and generate appropriate embodied decisions in natural language through long c… ☆317 · Updated last month
- Official repo and evaluation implementation of VSI-Bench ☆481 · Updated 2 months ago
- Implementation of "PaLM-E: An Embodied Multimodal Language Model" ☆302 · Updated last year
- Democratization of RT-2: "RT-2: New model translates vision and language into action" ☆451 · Updated 9 months ago
- Implementation of π₀, the robotic foundation model architecture proposed by Physical Intelligence ☆418 · Updated last week
- Evaluating and reproducing real-world robot manipulation policies (e.g., RT-1, RT-1-X, Octo) in simulation under common setups (e.g., Goo… ☆609 · Updated last month
- OpenEQA: Embodied Question Answering in the Era of Foundation Models ☆281 · Updated 7 months ago
- ☆344 · Updated 3 months ago
- World modeling challenge for humanoid robots ☆484 · Updated 6 months ago
- [ICLR 2025] LAPA: Latent Action Pretraining from Videos ☆248 · Updated 3 months ago
- PyTorch implementation of the models RT-1-X and RT-2-X from the paper "Open X-Embodiment: Robotic Learning Datasets and RT-X Models" ☆206 · Updated this week
- When do we not need larger vision models? ☆392 · Updated 3 months ago
- A Framework of Small-scale Large Multimodal Models ☆817 · Updated 2 weeks ago
- VLM Evaluation: Benchmark for VLMs, spanning text generation tasks from VQA to Captioning ☆110 · Updated 7 months ago
- ☆609 · Updated last year
- Embodied Reasoning Question Answer (ERQA) Benchmark ☆153 · Updated 2 months ago
- [ICLR 2025] VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation ☆315 · Updated 3 weeks ago
- [ICLR'25] LLaRA: Supercharging Robot Learning Data for Vision-Language Policy ☆210 · Updated last month
- ☆331 · Updated last year
- This repo contains evaluation code for the paper "MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for E… ☆424 · Updated last month
- [AAAI-25] Cobra: Extending Mamba to Multi-modal Large Language Model for Efficient Inference ☆276 · Updated 4 months ago
- Video-R1: Reinforcing Video Reasoning in MLLMs [🔥 the first paper to explore R1 for video] ☆515 · Updated this week
- Embodied Agent Interface (EAI): Benchmarking LLMs for Embodied Decision Making (NeurIPS D&B 2024 Oral) ☆195 · Updated 2 months ago
- [NeurIPS'24 Spotlight] Visual CoT: Advancing Multi-Modal Language Models with a Comprehensive Dataset and Benchmark for Chain-of-Thought … ☆311 · Updated 4 months ago
- Instruct2Act: Mapping Multi-modality Instructions to Robotic Actions with Large Language Model ☆362 · Updated 10 months ago