facebookresearch / jepa
PyTorch code and models for V-JEPA self-supervised learning from video.
☆2,906 · Updated last month
Alternatives and similar repositories for jepa:
Users interested in jepa are comparing it to the repositories listed below.
- Official codebase for I-JEPA, the Image-based Joint-Embedding Predictive Architecture, first outlined in the CVPR paper "Self-supervised… ☆2,961 · Updated 11 months ago
- Repository for Meta Chameleon, a mixed-modal early-fusion foundation model from FAIR. ☆1,984 · Updated 8 months ago
- Official PyTorch implementation of "Scalable Diffusion Models with Transformers". ☆7,162 · Updated 10 months ago
- [EMNLP 2024 🔥] Video-LLaVA: Learning United Visual Representation by Alignment Before Projection. ☆3,225 · Updated 4 months ago
- Code and model checkpoints for the AIMv1 and AIMv2 research projects. ☆1,267 · Updated 4 months ago
- Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT, and more. ☆2,815 · Updated last month
- Mixture-of-Experts for Large Vision-Language Models. ☆2,148 · Updated 4 months ago
- ☆3,712 · Updated last month
- 4M: Massively Multimodal Masked Modeling. ☆1,713 · Updated last month
- Cambrian-1 is a family of multimodal LLMs with a vision-centric design. ☆1,890 · Updated 5 months ago
- Official repo for "Mini-Gemini: Mining the Potential of Multi-modality Vision Language Models". ☆3,270 · Updated 11 months ago
- ☆4,077 · Updated 10 months ago
- PyTorch-native post-training library. ☆5,103 · Updated this week
- A PyTorch-native library for large-scale model training. ☆3,607 · Updated this week
- Autoregressive Model Beats Diffusion: 🦙 Llama for Scalable Image Generation. ☆1,708 · Updated 8 months ago
- VideoSys: An easy and efficient system for video generation. ☆1,956 · Updated last month
- ICLR 2024 Spotlight: curation/training code, metadata, distribution, and pre-trained models for MetaCLIP; CVPR 2024: MoDE: CLIP Data Expert… ☆1,420 · Updated last month
- [TMLR 2025] Latte: Latent Diffusion Transformer for Video Generation. ☆1,813 · Updated last week
- Next-Token Prediction is All You Need. ☆2,090 · Updated last month
- The official PyTorch implementation of Google's Gemma models. ☆5,419 · Updated last month
- VILA is a family of state-of-the-art vision-language models (VLMs) for diverse multimodal AI tasks across the edge, data center, and cloud. ☆3,138 · Updated 3 weeks ago
- Consistency Distilled Diff VAE. ☆2,175 · Updated last year
- Training LLMs with QLoRA + FSDP. ☆1,470 · Updated 5 months ago
- A suite of image and video neural tokenizers. ☆1,614 · Updated 2 months ago
- [EMNLP 2023 Demo] Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding. ☆2,987 · Updated 10 months ago
- An open-source toolkit for LLM development. ☆2,770 · Updated 3 months ago
- Emu Series: Generative Multimodal Models from BAAI. ☆1,711 · Updated 6 months ago
- Large World Model: modeling text and video with millions of tokens of context. ☆7,269 · Updated 6 months ago
- ☆1,798 · Updated 9 months ago
- MiniSora: A community aiming to explore the implementation path and future development direction of Sora. ☆1,267 · Updated 2 months ago