facebookresearch / ijepa
Official codebase for I-JEPA, the Image-based Joint-Embedding Predictive Architecture, first outlined in the CVPR 2023 paper "Self-Supervised Learning from Images with a Joint-Embedding Predictive Architecture."
☆3,067 · Updated last year
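Since the listing names the method without showing it, here is a minimal, hypothetical PyTorch sketch of the I-JEPA training objective: an EMA target encoder produces representation-space targets for masked patch blocks, and a predictor regresses them from visible-context features. The toy MLP encoders, dimensions, and the mean-pooled predictor input are illustrative stand-ins, not this repository's API; the real models are Vision Transformers, and the real predictor attends over context tokens.

```python
# Illustrative sketch of the I-JEPA objective, NOT code from this repo.
# Encoders are stand-in MLPs over flattened patches; the real ones are ViTs.
import torch
import torch.nn as nn

D = 128                      # embedding dim (toy value)
N = 196                      # number of patches (14x14 grid)
P = 3 * 16 * 16              # flattened RGB 16x16 patch

context_enc = nn.Sequential(nn.Linear(P, D), nn.GELU(), nn.Linear(D, D))
target_enc  = nn.Sequential(nn.Linear(P, D), nn.GELU(), nn.Linear(D, D))
target_enc.load_state_dict(context_enc.state_dict())   # EMA copy starts equal
for p in target_enc.parameters():
    p.requires_grad_(False)                            # updated only by EMA

predictor = nn.Sequential(nn.Linear(D, D), nn.GELU(), nn.Linear(D, D))
pos_embed = nn.Parameter(torch.zeros(N, D))            # positions condition the predictor

patches = torch.randn(8, N, P)                         # fake batch of patchified images
target_idx = torch.arange(60, 90)                      # one masked target block
context_idx = torch.tensor(
    [i for i in range(N) if i not in set(target_idx.tolist())])

# 1. Encode only the visible (context) patches.
ctx = context_enc(patches[:, context_idx])             # (B, |ctx|, D)

# 2. Targets are *representations* of masked patches under the EMA encoder.
with torch.no_grad():
    tgt = target_enc(patches)[:, target_idx]           # (B, |tgt|, D)

# 3. Predict each target representation from pooled context + its position.
pred = predictor(ctx.mean(dim=1, keepdim=True) + pos_embed[target_idx])

loss = (pred - tgt).pow(2).mean()                      # L2 loss in representation space
loss.backward()

# 4. EMA update of the target encoder (done after the optimizer step).
with torch.no_grad():
    for q, k in zip(context_enc.parameters(), target_enc.parameters()):
        k.mul_(0.996).add_(q, alpha=1 - 0.996)
```

The key design point this sketch preserves is that the loss lives in representation space, not pixel space: the target encoder is never trained by gradient descent, only by the exponential moving average, which prevents representational collapse.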
Alternatives and similar repositories for ijepa
Users interested in ijepa are comparing it to the libraries listed below.
- PyTorch code and models for V-JEPA self-supervised learning from video. ☆3,200 · Updated 6 months ago
- An open-source framework for training large multimodal models. ☆4,005 · Updated last year
- [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters ☆5,901 · Updated last year
- Foundation Architecture for (M)LLMs ☆3,113 · Updated last year
- Meta-Transformer for Unified Multimodal Learning ☆1,627 · Updated last year
- An Open-source Toolkit for LLM Development ☆2,790 · Updated 8 months ago
- Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT, and more. ☆3,124 · Updated 3 months ago
- [NeurIPS 2023] Official implementation of the paper "Segment Everything Everywhere All at Once" ☆4,716 · Updated last year
- Official implementation of the paper "FastViT: A Fast Hybrid Vision Transformer using Structural R…" ☆1,954 · Updated last year
- The implementation of "Prismer: A Vision-Language Model with Multi-Task Experts". ☆1,309 · Updated last year
- ☆1,709 · Updated 11 months ago
- Official repo for consistency models. ☆6,409 · Updated last year
- Implementation of I-JEPA from "Self-Supervised Learning from Images with a Joint-Embedding Predictive Architecture" ☆271 · Updated 8 months ago
- Hiera: A fast, powerful, and simple hierarchical vision transformer. ☆1,022 · Updated last year
- A general representation model across vision, audio, and language modalities. Paper: "ONE-PEACE: Exploring One General Representation Model To…" ☆1,052 · Updated 11 months ago
- Code and model checkpoints for the AIMv1 and AIMv2 research projects. ☆1,364 · Updated last month
- 4M: Massively Multimodal Masked Modeling ☆1,764 · Updated 3 months ago
- Segment Anything in High Quality [NeurIPS 2023] ☆4,064 · Updated this week
- Official PyTorch implementation of "Scalable Diffusion Models with Transformers" ☆7,813 · Updated last year
- PyTorch code and models for the DINOv2 self-supervised learning method (see the loading sketch after this list). ☆11,567 · Updated 3 weeks ago
- An implementation of "Retentive Network: A Successor to Transformer for Large Language Models" ☆1,204 · Updated last year
- Painter & SegGPT Series: Vision Foundation Models from BAAI ☆2,580 · Updated 9 months ago
- Multimodal-GPT ☆1,509 · Updated 2 years ago
- [EMNLP 2023 Demo] Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding ☆3,064 · Updated last year
- [ICLR 2024 Spotlight] Curation/training code, metadata, distribution, and pre-trained models for MetaCLIP; [CVPR 2024] MoDE: CLIP Data Expert… ☆1,675 · Updated 2 weeks ago
- Emu Series: Generative Multimodal Models from BAAI ☆1,744 · Updated 11 months ago
- ImageBind: One Embedding Space to Bind Them All ☆8,791 · Updated this week
- [CVPR 2024 Highlight] [VideoChatGPT] ChatGPT with video understanding, plus support for more LMs such as MiniGPT-4, StableLM, and MOSS. ☆3,298 · Updated 7 months ago
- [TMM 2025] Mixture-of-Experts for Large Vision-Language Models ☆2,235 · Updated 2 months ago
- Personalize Segment Anything Model (SAM) with 1 shot in 10 seconds ☆1,615 · Updated last year
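For the DINOv2 entry above, the repository documents loading pretrained backbones through torch.hub. A minimal sketch, assuming the `dinov2_vits14` model name from the DINOv2 README and network access on first run:

```python
# Load a pretrained DINOv2 ViT-S/14 backbone via torch.hub and extract
# a global image embedding. The input here is a random placeholder;
# real use needs a properly resized and normalized image tensor.
import torch

model = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
model.eval()

image = torch.randn(1, 3, 224, 224)      # 224 is divisible by the 14px patch size
with torch.no_grad():
    features = model(image)              # CLS embedding, (1, 384) for ViT-S/14
print(features.shape)
```

These frozen embeddings are what most of the downstream comparisons in this list (linear probes, retrieval, segmentation heads) build on.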