facebookresearch / vjepa2
PyTorch code and models for VJEPA2 self-supervised learning from video.
☆2,177 · Updated 2 weeks ago
Alternatives and similar repositories for vjepa2
Users interested in vjepa2 are comparing it to the libraries listed below:
- Cosmos-Reason1 models understand the physical common sense and generate appropriate embodied decisions in natural language through long c… ☆690 · Updated 2 weeks ago
- State-of-the-art Image & Video CLIP, Multimodal Large Language Models, and More! ☆1,594 · Updated last week
- A suite of image and video neural tokenizers ☆1,668 · Updated 7 months ago
- [ICCV 2025] Implementation for Describe Anything: Detailed Localized Image and Video Captioning ☆1,323 · Updated 2 months ago
- [CVPR 2025] Magma: A Foundation Model for Multimodal AI Agents ☆1,802 · Updated 3 months ago
- Code for the Molmo Vision-Language Model ☆739 · Updated 9 months ago
- MMaDA - Open-Sourced Multimodal Large Diffusion Language Models ☆1,341 · Updated 3 weeks ago
- Official repository for "AM-RADIO: Reduce All Domains Into One" ☆1,336 · Updated last week
- Cosmos-Transfer1 is a world-to-world transfer model designed to bridge the perceptual divide between simulated and real-world environment… ☆647 · Updated this week
- Reference PyTorch implementation and models for DINOv3 ☆6,749 · Updated last week
- This repo contains the code for the paper "Intuitive physics understanding emerges from self-supervised pretraining on natural videos" ☆176 · Updated 6 months ago
- A generative and self-guided robotic agent that endlessly proposes and masters new skills. ☆1,065 · Updated last year
- Continuous Thought Machines, because thought takes time and reasoning is a process. ☆1,286 · Updated last month
- OpenVLA: An open-source vision-language-action model for robotic manipulation. ☆3,750 · Updated 5 months ago
- ☆1,395 · Updated 9 months ago
- RoboBrain 2.0: Advanced version of RoboBrain. See Better. Think Harder. Do Smarter. 🎉🎉🎉 ☆586 · Updated 2 weeks ago
- Official repo and evaluation implementation of VSI-Bench ☆589 · Updated last month
- Re-implementation of pi0 vision-language-action (VLA) model from Physical Intelligence ☆1,135 · Updated 7 months ago
- RoboVerse: Towards a Unified Platform, Dataset and Benchmark for Scalable and Generalizable Robot Learning ☆1,416 · Updated last week
- A flexible and efficient codebase for training visually-conditioned language models (VLMs) ☆785 · Updated last year
- [IROS 2025] The Large-scale Manipulation Platform for Scalable and Intelligent Embodied Systems ☆2,338 · Updated 2 weeks ago
- Heterogeneous Pre-trained Transformer (HPT) as Scalable Policy Learner. ☆509 · Updated 9 months ago
- SpatialLM: Training Large Language Models for Structured Indoor Modeling ☆3,942 · Updated last week
- SAPIEN Manipulation Skill Framework, an open source GPU parallelized robotics simulator and benchmark, led by Hillbot, Inc. ☆2,038 · Updated this week
- DINO-X: The World's Top-Performing Vision Model for Open-World Object Detection and Understanding ☆1,200 · Updated last month
- Implementation of π₀, the robotic foundation model architecture proposed by Physical Intelligence ☆501 · Updated last month
- Kimi-VL: Mixture-of-Experts Vision-Language Model for Multimodal Reasoning, Long-Context Understanding, and Strong Agent Capabilities ☆1,053 · Updated last month
- VILA is a family of state-of-the-art vision language models (VLMs) for diverse multimodal AI tasks across the edge, data center, and clou… ☆3,528 · Updated last month
- [RSS 2025] Learning to Act Anywhere with Task-centric Latent Actions ☆711 · Updated 3 weeks ago
- Seed1.5-VL, a vision-language foundation model designed to advance general-purpose multimodal understanding and reasoning, achieving stat… ☆1,416 · Updated 2 months ago