TRI-ML / prismatic-vlms
A flexible and efficient codebase for training visually-conditioned language models (VLMs)
☆712 · Updated 11 months ago
Alternatives and similar repositories for prismatic-vlms
Users interested in prismatic-vlms are comparing it to the libraries listed below.
- Compose multimodal datasets ☆413 · Updated 2 weeks ago
- Fine-Tuning Vision-Language-Action Models: Optimizing Speed and Success ☆481 · Updated last month
- Cosmos-Reason1 models understand the physical common sense and generate appropriate embodied decisions in natural language through long chain-of-thought reasoning processes ☆517 · Updated last week
- Embodied Chain of Thought: A robotic policy that reasons to solve the task. ☆267 · Updated 2 months ago
- Implementation of "PaLM-E: An Embodied Multimodal Language Model" ☆312 · Updated last year
- VLM Evaluation: Benchmark for VLMs, spanning text generation tasks from VQA to Captioning ☆116 · Updated 9 months ago
- Official repo for Fine-Tuning Large Vision-Language Models as Decision-Making Agents via Reinforcement Learning ☆366 · Updated 6 months ago
- OpenEQA: Embodied Question Answering in the Era of Foundation Models ☆291 · Updated 9 months ago
- Evaluating and reproducing real-world robot manipulation policies (e.g., RT-1, RT-1-X, Octo) in simulation under common setups (e.g., Google Robot, WidowX+Bridge) ☆661 · Updated 2 months ago
- [ICLR 2025] LAPA: Latent Action Pretraining from Videos ☆312 · Updated 5 months ago
- ☆614 · Updated last year
- ☆363 · Updated 5 months ago
- Heterogeneous Pre-trained Transformer (HPT) as a Scalable Policy Learner ☆501 · Updated 6 months ago
- Embodied Reasoning Question Answer (ERQA) Benchmark ☆167 · Updated 3 months ago
- Instruct2Act: Mapping Multi-modality Instructions to Robotic Actions with Large Language Model ☆365 · Updated last year
- Official repo and evaluation implementation of VSI-Bench ☆522 · Updated 3 months ago
- Embodied Agent Interface (EAI): Benchmarking LLMs for Embodied Decision Making (NeurIPS D&B 2024 Oral) ☆209 · Updated 3 months ago
- CALVIN - A benchmark for Language-Conditioned Policy Learning for Long-Horizon Robot Manipulation Tasks ☆597 · Updated 4 months ago
- [NeurIPS'24 Spotlight] Visual CoT: Advancing Multi-Modal Language Models with a Comprehensive Dataset and Benchmark for Chain-of-Thought Reasoning ☆334 · Updated 6 months ago
- Code for the Molmo Vision-Language Model ☆521 · Updated 6 months ago
- World modeling challenge for humanoid robots ☆490 · Updated 7 months ago
- Video-R1: Reinforcing Video Reasoning in MLLMs [🔥 the first paper to explore R1 for video] ☆577 · Updated 3 weeks ago
- Democratization of RT-2 "RT-2: New model translates vision and language into action" ☆475 · Updated 11 months ago
- Implementation of π₀, the robotic foundation model architecture proposed by Physical Intelligence ☆441 · Updated 2 weeks ago
- A Framework of Small-scale Large Multimodal Models ☆841 · Updated 2 months ago
- Suite of human-collected datasets and a multi-task continuous control benchmark for open vocabulary visuolinguomotor learning ☆317 · Updated last week
- [ICLR 2025] VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation ☆353 · Updated 2 months ago
- [ECCV 2024 Oral] Code for paper: An Image is Worth 1/2 Tokens After Layer 2: Plug-and-Play Inference Acceleration for Large Vision-Language Models ☆439 · Updated 5 months ago
- Paper list in the survey paper: Toward General-Purpose Robots via Foundation Models: A Survey and Meta-Analysis ☆434 · Updated 5 months ago
- PyTorch implementation of the models RT-1-X and RT-2-X from the paper: "Open X-Embodiment: Robotic Learning Datasets and RT-X Models" ☆211 · Updated last week