TRI-ML / prismatic-vlms
A flexible and efficient codebase for training visually-conditioned language models (VLMs)
⭐772 · Updated last year
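For context on what prismatic-vlms provides: its README exposes a single `load` entry point for pretrained checkpoints plus a prompt-builder/generate loop. A minimal inference sketch along those lines (the model ID, sampling arguments, and exact signatures are assumptions that may differ across versions):

```python
import requests
import torch
from PIL import Image

from prismatic import load  # assumes an installed prismatic-vlms checkout

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load a pretrained VLM by ID (auto-downloads from the HF Hub); the ID is illustrative.
vlm = load("prism-dinosiglip+7b")
vlm.to(device, dtype=torch.bfloat16)

# Any RGB image works; this URL is a placeholder.
url = "https://upload.wikimedia.org/wikipedia/commons/4/47/PNG_transparency_demonstration_1.png"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

# Build a chat-style prompt, then generate conditioned on the image.
prompt_builder = vlm.get_prompt_builder()
prompt_builder.add_turn(role="human", message="What is going on in this image?")
generated_text = vlm.generate(
    image,
    prompt_builder.get_prompt(),
    do_sample=True,
    temperature=0.4,
    max_new_tokens=512,
)
print(generated_text)
```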
Alternatives and similar repositories for prismatic-vlms
Users who are interested in prismatic-vlms are comparing it to the repositories listed below
- Heterogeneous Pre-trained Transformer (HPT) as Scalable Policy Learner ⭐510 · Updated 8 months ago
- Compose multimodal datasets 🎹 ⭐466 · Updated 3 weeks ago
- Cosmos-Reason1 models understand physical common sense and generate appropriate embodied decisions in natural language through long c… ⭐664 · Updated this week
- Official repo for "Fine-Tuning Large Vision-Language Models as Decision-Making Agents via Reinforcement Learning" ⭐384 · Updated 8 months ago
- Fine-Tuning Vision-Language-Action Models: Optimizing Speed and Success ⭐627 · Updated 4 months ago
- Implementation of "PaLM-E: An Embodied Multimodal Language Model" ⭐319 · Updated last year
- Embodied Chain of Thought: a robotic policy that reasons to solve the task ⭐301 · Updated 4 months ago
- Official repo and evaluation implementation of VSI-Bench ⭐575 · Updated 3 weeks ago
- OpenEQA: Embodied Question Answering in the Era of Foundation Models ⭐312 · Updated 11 months ago
- Embodied Reasoning Question Answer (ERQA) Benchmark ⭐207 · Updated 5 months ago
- Evaluating and reproducing real-world robot manipulation policies (e.g., RT-1, RT-1-X, Octo) in simulation under common setups (e.g., Goo… ⭐735 · Updated 5 months ago
- Code for the Molmo Vision-Language Model ⭐735 · Updated 8 months ago
- ⭐386 · Updated 7 months ago
- [ICLR 2025] LAPA: Latent Action Pretraining from Videos ⭐356 · Updated 7 months ago
- CALVIN - A benchmark for Language-Conditioned Policy Learning for Long-Horizon Robot Manipulation Tasks ⭐656 · Updated last month
- Instruct2Act: Mapping Multi-modality Instructions to Robotic Actions with Large Language Model ⭐368 · Updated last year
- A curated list of awesome papers on Embodied AI and related research/industry-driven resources ⭐464 · Updated 2 months ago
- Recent LLM-based CV and related works. Welcome to comment/contribute! ⭐871 · Updated 5 months ago
- A frontier collection and survey of vision-language model papers and model GitHub repositories; continuously updated ⭐337 · Updated last month
- A Framework of Small-scale Large Multimodal Models ⭐881 · Updated 4 months ago
- Implementation of π₀, the robotic foundation model architecture proposed by Physical Intelligence ⭐486 · Updated last month
- OpenVLA: an open-source vision-language-action model for robotic manipulation (see the inference sketch after this list) ⭐250 · Updated 5 months ago
- Embodied Agent Interface (EAI): Benchmarking LLMs for Embodied Decision Making (NeurIPS D&B 2024 Oral) ⭐239 · Updated 5 months ago
- Democratization of RT-2: "RT-2: New model translates vision and language into action" ⭐501 · Updated last year
- World modeling challenge for humanoid robots ⭐506 · Updated 9 months ago
- Video-R1: Reinforcing Video Reasoning in MLLMs [🔥 the first paper to explore R1 for video] ⭐670 · Updated last month
- VLM Evaluation: Benchmark for VLMs, spanning text generation tasks from VQA to Captioning ⭐122 · Updated 11 months ago
- Benchmarking Knowledge Transfer in Lifelong Robot Learning ⭐834 · Updated 5 months ago
- [ICLR'25] LLaRA: Supercharging Robot Learning Data for Vision-Language Policy ⭐223 · Updated 5 months ago
- PyTorch implementation of the RT-1-X and RT-2-X models from the paper "Open X-Embodiment: Robotic Learning Datasets and RT-X Models" ⭐221 · Updated last week
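The OpenVLA entry above ships checkpoints through the Hugging Face Hub and adds an action-prediction head on top of a standard vision-to-sequence model. A minimal inference sketch, assuming the upstream `openvla/openvla-7b` checkpoint and the `predict_action` API documented in the OpenVLA README (the instruction, camera frame, and `unnorm_key` here are placeholders):

```python
import torch
from PIL import Image
from transformers import AutoModelForVision2Seq, AutoProcessor

# trust_remote_code pulls in OpenVLA's custom model/processor classes.
processor = AutoProcessor.from_pretrained("openvla/openvla-7b", trust_remote_code=True)
vla = AutoModelForVision2Seq.from_pretrained(
    "openvla/openvla-7b",
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True,
).to("cuda:0")

# In practice `image` is the robot's camera frame; a blank image stands in here.
image = Image.new("RGB", (224, 224))
prompt = "In: What action should the robot take to pick up the red block?\nOut:"

# predict_action returns a 7-DoF end-effector action, un-normalized with the
# statistics of the named training mix ("bridge_orig" is one documented key).
inputs = processor(prompt, image).to("cuda:0", dtype=torch.bfloat16)
action = vla.predict_action(**inputs, unnorm_key="bridge_orig", do_sample=False)
print(action)
```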