PyTorch code and models for V-JEPA self-supervised learning from video.
☆3,566, updated Feb 27, 2025
Alternatives and similar repositories for jepa
Users interested in jepa are comparing it to the libraries listed below.
- Official codebase for I-JEPA, the Image-based Joint-Embedding Predictive Architecture. First outlined in the CVPR paper, "Self-supervised… (☆3,247, updated May 8, 2024)
- Large World Model -- Modeling Text and Video with Millions Context (☆7,399, updated Oct 19, 2024)
- Official PyTorch Implementation of "Scalable Diffusion Models with Transformers" (☆8,382, updated May 31, 2024)
- [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond. (☆24,478, updated Aug 12, 2024)
- PyTorch code and models for the DINOv2 self-supervised learning method. (☆12,427, updated Feb 24, 2026)
- This project aims to reproduce Sora (OpenAI's T2V model); we hope the open-source community will contribute to this project. (☆12,134, updated Oct 29, 2025)
- Mamba SSM architecture (☆17,257, updated Feb 18, 2026)
- ImageBind: One Embedding Space to Bind Them All (☆8,980, updated Nov 21, 2025)
- Minimal, clean code for the Byte Pair Encoding (BPE) algorithm commonly used in LLM tokenization. (☆10,347, updated Jul 1, 2024)
- Open-Sora: Democratizing Efficient Video Production for All (☆28,632, updated Apr 30, 2025)
- VideoSys: An easy and efficient system for video generation (☆2,016, updated Aug 27, 2025)
- [TMLR 2025] Latte: Latent Diffusion Transformer for Video Generation. (☆1,920, updated Oct 30, 2025)
- NeurIPS 2025 Spotlight; ICLR 2024 Spotlight; CVPR 2024; EMNLP 2024 (☆1,815, updated Nov 27, 2025)
- Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT and more. (☆3,371, updated May 19, 2025)
- [TMM 2025 🔥] Mixture-of-Experts for Large Vision-Language Models (☆2,303, updated Jul 15, 2025)
- Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities (☆22,030, updated Jan 23, 2026)
- [NeurIPS 2024 Best Paper Award] [GPT beats diffusion 🔥] [scaling laws in visual generation 📈] Official impl. of "Visual Autoregressive Mod… (☆8,626, updated Nov 10, 2025)
- The repository provides code for running inference with the Meta Segment Anything Model 2 (SAM 2), links for downloading the trained mode… (☆18,560, updated Dec 25, 2024)
- This repo contains the code for the paper "Intuitive physics understanding emerges from self-supervised pretraining on natural videos" (☆223, updated Feb 17, 2025)
- LAVIS - A One-stop Library for Language-Vision Intelligence (☆11,167, updated Nov 18, 2024)
- Fast and memory-efficient exact attention (☆22,361, updated Feb 25, 2026)
- Generative Models by Stability AI (☆26,943, updated Dec 16, 2025)
- VILA is a family of state-of-the-art vision language models (VLMs) for diverse multimodal AI tasks across the edge, data center, and clou… (☆3,766, updated Nov 28, 2025)
- [CVPR 2024] Real-Time Open-Vocabulary Object Detection (☆6,217, updated Feb 26, 2025)
- Consistency Distilled Diff VAE (☆2,209, updated Nov 7, 2023)
- A state-of-the-art open visual language model | multimodal pre-trained model (☆6,724, updated May 29, 2024)
- 4M: Massively Multimodal Masked Modeling (☆1,788, updated Jun 2, 2025)
- Modeling, training, eval, and inference code for OLMo (☆6,326, updated Nov 24, 2025)
- This repository provides the code and model checkpoints for AIMv1 and AIMv2 research projects. (☆1,402, updated Aug 4, 2025)
- [ECCV 2024] VideoMamba: State Space Model for Efficient Video Understanding (☆1,082, updated Jul 6, 2024)
- An open source implementation of CLIP. (☆13,430, updated this week)
- Tools for merging pretrained large language models. (☆6,826, updated this week)
- VideoCrafter2: Overcoming Data Limitations for High-Quality Video Diffusion Models (☆5,032, updated Jan 9, 2026)
- [CVPR 2024] Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data. Foundation Model for Monocular Depth Estimation (☆8,006, updated Jul 17, 2024)
- (☆4,577, updated Sep 14, 2025)
- Train transformer language models with reinforcement learning. (☆17,460, updated this week)
- Cambrian-1 is a family of multimodal LLMs with a vision-centric design. (☆1,986, updated Nov 7, 2025)
- DeepSeek-VL: Towards Real-World Vision-Language Understanding (☆4,076, updated Apr 24, 2024)
- Official repo for "Mini-Gemini: Mining the Potential of Multi-modality Vision Language Models" (☆3,334, updated May 4, 2024)