Little-Podi / AdaWorld
The PyTorch implementation of the paper "AdaWorld: Learning Adaptable World Models with Latent Actions".
☆36 · Updated this week
Alternatives and similar repositories for AdaWorld:
Users interested in AdaWorld are comparing it to the libraries listed below.
- [CVPR 2024] Situational Awareness Matters in 3D Vision Language Reasoning ☆37 · Updated 3 months ago
- Code & Data for Grounded 3D-LLM with Referent Tokens ☆108 · Updated 2 months ago
- ☆46 · Updated 3 months ago
- [NeurIPS 2024] Official code repository for the MSR3D paper ☆44 · Updated 3 weeks ago
- Code for MultiPLY: A Multisensory Object-Centric Embodied Large Language Model in 3D World ☆127 · Updated 5 months ago
- Code for the paper "Grounding Video Models to Actions through Goal Conditioned Exploration" ☆44 · Updated 3 months ago
- MetaSpatial leverages reinforcement learning to enhance 3D spatial reasoning in vision-language models (VLMs), enabling more structured, … ☆81 · Updated this week
- ☆67 · Updated 6 months ago
- A comprehensive list of papers on the definition of World Models and using World Models for General Video Generation, Embodied AI, and A… ☆84 · Updated 2 weeks ago
- HybridVLA: Collaborative Diffusion and Autoregression in a Unified Vision-Language-Action Model ☆130 · Updated this week
- ☆122 · Updated 2 months ago
- EgoVid-5M: A Large-Scale Video-Action Dataset for Egocentric Video Generation ☆98 · Updated 4 months ago
- ☆49 · Updated 5 months ago
- ☆94 · Updated 7 months ago
- [ICLR 2025] Official implementation and benchmark evaluation repository of "PhysBench: Benchmarking and Enhancing Vision-Language Models … ☆44 · Updated 2 weeks ago
- GRAPE: Guided-Reinforced Vision-Language-Action Preference Optimization ☆98 · Updated last week
- The official repo for the paper "In-Context Imitation Learning via Next-Token Prediction" ☆69 · Updated last week
- [NeurIPS'24] This repository is the implementation of "SpatialRGPT: Grounded Spatial Reasoning in Vision Language Models" ☆154 · Updated 3 months ago
- ☆85 · Updated 3 weeks ago
- ☆75 · Updated 7 months ago
- [ECCV 2024] Empowering 3D Visual Grounding with Reasoning Capabilities ☆67 · Updated 5 months ago
- A Simple yet Effective Pathway to Empowering LLaVA to Understand and Interact with 3D World ☆229 · Updated 4 months ago
- [CoRL 2024] VLM-Grounder: A VLM Agent for Zero-Shot 3D Visual Grounding ☆93 · Updated 4 months ago
- Official implementation of the paper "Unifying 3D Vision-Language Understanding via Promptable Queries" ☆73 · Updated 7 months ago
- [CVPR 2025] 3D-GRAND: Towards Better Grounding and Less Hallucination for 3D-LLMs ☆36 · Updated 9 months ago
- Unified Video Action Model ☆128 · Updated last week
- [ICLR 2025] LAPA: Latent Action Pretraining from Videos ☆199 · Updated 2 months ago
- [RSS 2024] Learning Manipulation by Predicting Interaction ☆101 · Updated 7 months ago
- Generative World Explorer ☆138 · Updated 4 months ago
- ☆30 · Updated this week