PyTorch code and models for VJEPA2 self-supervised learning from video.
☆3,059 · Aug 28, 2025 · Updated 6 months ago
Alternatives and similar repositories for vjepa2
Users interested in vjepa2 are comparing it to the repositories listed below.
- Reference PyTorch implementation and models for DINOv3 ☆9,740 · Feb 17, 2026 · Updated 2 weeks ago
- State-of-the-art Image & Video CLIP, Multimodal Large Language Models, and More! ☆2,181 · Feb 11, 2026 · Updated 3 weeks ago
- [ICLR 2025] LAPA: Latent Action Pretraining from Videos ☆472 · Jan 22, 2025 · Updated last year
- ☆10,349 · Dec 27, 2025 · Updated 2 months ago
- Official PyTorch Implementation of Unified Video Action Model (RSS 2025) ☆338 · Jul 23, 2025 · Updated 7 months ago
- PyTorch code and models for V-JEPA self-supervised learning from video. ☆3,566 · Feb 27, 2025 · Updated last year
- Cosmos-Reason1 models understand physical common sense and generate appropriate embodied decisions in natural language through long c… ☆920 · Jan 6, 2026 · Updated 2 months ago
- NVIDIA Isaac GR00T N1.6 - A Foundation Model for Generalist Robots. ☆6,275 · Feb 27, 2026 · Updated last week
- New repo collection for NVIDIA Cosmos: https://github.com/nvidia-cosmos ☆8,082 · Jan 6, 2026 · Updated 2 months ago
- OpenVLA: An open-source vision-language-action model for robotic manipulation. ☆5,383 · Mar 23, 2025 · Updated 11 months ago
- RoboVerse: Towards a Unified Platform, Dataset and Benchmark for Scalable and Generalizable Robot Learning ☆1,672 · Feb 23, 2026 · Updated last week
- [ICCV 2025] TesserAct: Learning 4D Embodied World Models ☆380 · Aug 4, 2025 · Updated 7 months ago
- Cosmos-Transfer1 is a world-to-world transfer model designed to bridge the perceptual divide between simulated and real-world environment… ☆779 · Jan 6, 2026 · Updated 2 months ago
- A suite of image and video neural tokenizers ☆1,711 · Feb 11, 2025 · Updated last year
- The repository provides code for running inference with the Meta Segment Anything Model 2 (SAM 2), links for downloading the trained mode… ☆18,560 · Dec 25, 2024 · Updated last year
- [RSS 2025] Learning to Act Anywhere with Task-centric Latent Actions ☆994 · Nov 19, 2025 · Updated 3 months ago
- A generative world for general-purpose robotics & embodied AI learning. ☆28,216 · Updated this week
- Galaxea's open-source VLA repository ☆534 · Feb 14, 2026 · Updated 3 weeks ago
- MAGI-1: Autoregressive Video Generation at Scale ☆3,647 · Jun 17, 2025 · Updated 8 months ago
- [CVPR 2025 Best Paper Award] VGGT: Visual Geometry Grounded Transformer ☆12,484 · Oct 11, 2025 · Updated 4 months ago
- [ICLR & NeurIPS 2025] Repository for the Show-o series: One Single Transformer to Unify Multimodal Understanding and Generation. ☆1,887 · Jan 8, 2026 · Updated last month
- Official repo and evaluation implementation of VSI-Bench ☆675 · Aug 5, 2025 · Updated 7 months ago
- Cosmos-Predict2 is a collection of general-purpose world foundation models for Physical AI that can be fine-tuned into customized world m… ☆746 · Oct 29, 2025 · Updated 4 months ago
- [ICCV 2025 & ICCV 2025 RIWM Outstanding Paper] Aether: Geometric-Aware Unified World Modeling ☆575 · Oct 26, 2025 · Updated 4 months ago
- 🤗 LeRobot: Making AI for Robotics more accessible with end-to-end learning ☆21,981 · Updated this week
- [IROS 2025 Best Paper Award Finalist & IEEE TRO 2026] The Large-scale Manipulation Platform for Scalable and Intelligent Embodied Systems ☆2,804 · Dec 16, 2025 · Updated 2 months ago
- Official implementation of Continuous 3D Perception Model with Persistent State ☆1,345 · Aug 27, 2025 · Updated 6 months ago
- 🔥 SpatialVLA: a spatial-enhanced vision-language-action model trained on 1.1 million real robot episodes. Accepted at RSS 2025. ☆657 · Jun 23, 2025 · Updated 8 months ago
- ☆384 · Mar 24, 2025 · Updated 11 months ago
- [ICML 2024] 3D-VLA: A 3D Vision-Language-Action Generative World Model ☆623 · Oct 29, 2024 · Updated last year
- PyTorch code and models for the DINOv2 self-supervised learning method. ☆12,465 · Updated this week
- RDT-1B: a Diffusion Foundation Model for Bimanual Manipulation ☆1,625 · Jan 21, 2026 · Updated last month
- Code for "Diffusion Forcing: Next-token Prediction Meets Full-Sequence Diffusion" ☆1,169 · Nov 9, 2025 · Updated 3 months ago
- [CVPR'25 Oral] MoGe: Unlocking Accurate Monocular Geometry Estimation for Open-Domain Images with Optimal Training Supervision ☆2,328 · Nov 2, 2025 · Updated 4 months ago
- Qwen3-VL is the multimodal large language model series developed by the Qwen team, Alibaba Cloud. ☆18,505 · Jan 30, 2026 · Updated last month
- [RSS 2023] Diffusion Policy: Visuomotor Policy Learning via Action Diffusion ☆3,820 · Dec 24, 2024 · Updated last year
- SAPIEN Manipulation Skill Framework, an open-source, GPU-parallelized robotics simulator and benchmark, led by Hillbot, Inc. ☆2,629 · Updated this week
- Grounded SAM 2: Ground and Track Anything in Videos with Grounding DINO, Florence-2 and SAM 2 ☆3,292 · Nov 11, 2025 · Updated 3 months ago
- VILA is a family of state-of-the-art vision language models (VLMs) for diverse multimodal AI tasks across the edge, data center, and clou… ☆3,771 · Nov 28, 2025 · Updated 3 months ago