facebookresearch / jepa
PyTorch code and models for V-JEPA self-supervised learning from video.
☆3,086 · Updated 3 months ago
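V-JEPA (and its image counterpart I-JEPA, listed below) learns by predicting masked regions in representation space rather than in pixel space: a context encoder embeds the visible patches, and a small predictor regresses the embeddings that an EMA-updated target encoder produces for the masked patches. The following is a minimal conceptual sketch of that joint-embedding predictive objective; every module and tensor name here is a hypothetical placeholder, not the repository's actual API.

```python
# Conceptual sketch of a joint-embedding predictive objective (I-JEPA / V-JEPA style).
# All names are hypothetical placeholders, not the facebookresearch/jepa API.
import copy
import torch
import torch.nn as nn

class TinyEncoder(nn.Module):
    """Stand-in for the ViT context/target encoder: patch tokens -> embeddings."""
    def __init__(self, dim=64):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
    def forward(self, x):
        return self.proj(x)

context_encoder = TinyEncoder()
predictor = nn.Linear(64, 64)                    # predicts target embeddings from context
target_encoder = copy.deepcopy(context_encoder)  # EMA copy, receives no gradients
for p in target_encoder.parameters():
    p.requires_grad_(False)

tokens = torch.randn(2, 16, 64)                  # (batch, patches, dim) toy patch tokens
mask = torch.zeros(16, dtype=torch.bool)
mask[8:] = True                                  # predict the masked half from the visible half

ctx = context_encoder(tokens[:, ~mask])          # encode only the visible patches
pred = predictor(ctx).mean(dim=1)                # crude pooled prediction for the masked region
with torch.no_grad():
    tgt = target_encoder(tokens[:, mask]).mean(dim=1)  # targets come from the EMA encoder

loss = nn.functional.l1_loss(pred, tgt)          # regress embeddings, not pixels
loss.backward()

# EMA update of the target encoder (momentum value is illustrative)
with torch.no_grad():
    for p_t, p_c in zip(target_encoder.parameters(), context_encoder.parameters()):
        p_t.mul_(0.996).add_(p_c, alpha=0.004)
```

Because the loss is computed between embeddings rather than reconstructed pixels, this family of objectives differs from masked autoencoders, which decode back to pixel space.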
Alternatives and similar repositories for jepa
Users interested in jepa are comparing it to the libraries listed below.
- Official codebase for I-JEPA, the Image-based Joint-Embedding Predictive Architecture. First outlined in the CVPR paper, "Self-supervised… ☆2,994 · Updated last year
- Official PyTorch Implementation of "Scalable Diffusion Models with Transformers" ☆7,449 · Updated last year
- 4M: Massively Multimodal Masked Modeling ☆1,735 · Updated 2 weeks ago
- Schedule-Free Optimization in PyTorch ☆2,179 · Updated last month
- PyTorch native post-training library ☆5,273 · Updated this week
- Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT and more. ☆2,950 · Updated last month
- A PyTorch native platform for training generative AI models ☆3,933 · Updated this week
- Training LLMs with QLoRA + FSDP ☆1,485 · Updated 7 months ago
- This repository provides the code and model checkpoints for AIMv1 and AIMv2 research projects. ☆1,303 · Updated last month
- Simple and efficient pytorch-native transformer text generation in <1000 LOC of python. ☆5,993 · Updated 2 months ago
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection ☆1,570 · Updated 7 months ago
- ICLR 2024 Spotlight: curation/training code, metadata, distribution and pre-trained models for MetaCLIP; CVPR 2024: MoDE: CLIP Data Expert… ☆1,460 · Updated 3 months ago
- Repository for Meta Chameleon, a mixed-modal early-fusion foundation model from FAIR. ☆2,016 · Updated 10 months ago
- Cambrian-1 is a family of multimodal LLMs with a vision-centric design. ☆1,912 · Updated 7 months ago
- Mixture-of-Experts for Large Vision-Language Models ☆2,181 · Updated 6 months ago
- ☆4,088 · Updated last year
- Next-Token Prediction is All You Need ☆2,149 · Updated 3 months ago
- VILA is a family of state-of-the-art vision language models (VLMs) for diverse multimodal AI tasks across the edge, data center, and clou… ☆3,344 · Updated this week
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models ☆1,544 · Updated last year
- Reaching LLaMA2 Performance with 0.1M Dollars ☆982 · Updated 10 months ago
- 【EMNLP 2024🔥】Video-LLaVA: Learning United Visual Representation by Alignment Before Projection ☆3,281 · Updated 6 months ago
- Simple, minimal implementation of the Mamba SSM in one file of PyTorch (see the selective-scan sketch after this list). ☆2,820 · Updated last year
- Autoregressive Model Beats Diffusion: 🦙 Llama for Scalable Image Generation ☆1,779 · Updated 10 months ago
- VideoSys: An easy and efficient system for video generation ☆1,980 · Updated 3 months ago
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling ☆881 · Updated last month
- EfficientSAM: Leveraged Masked Image Pretraining for Efficient Segment Anything ☆2,357 · Updated 5 months ago
- ☆1,824 · Updated 11 months ago
- ☆3,930 · Updated last week
- Official repository for our work on micro-budget training of large-scale diffusion models. ☆1,481 · Updated 5 months ago
- Thunder gives you PyTorch models superpowers for training and inference. Unlock out-of-the-box optimizations for performance, memory and … ☆1,365 · Updated this week
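The Mamba entry above boils down to a selective state-space recurrence: input-dependent step sizes and projections gate how the hidden state is updated and read out at each step. Below is a toy, purely illustrative sketch of that recurrence under those assumptions; the function and argument names are hypothetical and this is not the linked repository's code.

```python
# Conceptual sketch of the selective state-space recurrence behind Mamba.
# Hypothetical toy code, not the linked repository's implementation.
import torch

def selective_ssm_scan(x, A, B, C, delta):
    """Sequential scan: h_t = exp(delta_t * A) * h_{t-1} + delta_t * B_t * x_t, y_t = C_t . h_t.

    x:     (batch, length, d_in)     input sequence
    A:     (d_in, d_state)           learned state matrix (here diagonal, negative)
    B, C:  (batch, length, d_state)  input-dependent projections ("selective")
    delta: (batch, length, d_in)     input-dependent step sizes
    """
    batch, length, d_in = x.shape
    d_state = A.shape[1]
    h = torch.zeros(batch, d_in, d_state)
    ys = []
    for t in range(length):
        dA = torch.exp(delta[:, t, :, None] * A)                          # discretize A
        dBx = delta[:, t, :, None] * B[:, t, None, :] * x[:, t, :, None]  # discretized input term
        h = dA * h + dBx                                                   # state update
        ys.append((h * C[:, t, None, :]).sum(-1))                          # read out the state
    return torch.stack(ys, dim=1)                                          # (batch, length, d_in)

# Toy usage with random tensors
b, l, d, n = 2, 8, 4, 16
y = selective_ssm_scan(torch.randn(b, l, d), -torch.rand(d, n),
                       torch.randn(b, l, n), torch.randn(b, l, n),
                       torch.rand(b, l, d))
```

Real implementations replace this Python loop with a parallel scan or a fused CUDA kernel; the loop form is only meant to make the recurrence explicit.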