facebookresearch / perception_models
State-of-the-art Image & Video CLIP, Multimodal Large Language Models, and More!
☆1,607 · Updated 2 weeks ago
Alternatives and similar repositories for perception_models
Users interested in perception_models are comparing it to the libraries listed below.
- Code for the Molmo Vision-Language Model (☆748, updated 9 months ago)
- Official repository for "AM-RADIO: Reduce All Domains Into One" (☆1,339, updated 2 weeks ago)
- [ICCV 2025] Implementation for Describe Anything: Detailed Localized Image and Video Captioning (☆1,333, updated 2 months ago)
- PyTorch code and models for VJEPA2 self-supervised learning from video (☆2,198, updated 3 weeks ago)
- A suite of image and video neural tokenizers (☆1,669, updated 7 months ago)
- This repository provides the code and model checkpoints for AIMv1 and AIMv2 research projects (☆1,364, updated last month)
- Grounding DINO 1.5: IDEA Research's Most Capable Open-World Object Detection Model Series (☆1,029, updated 7 months ago)
- [ICCV 2025] OpenVision: A Fully-Open, Cost-Effective Family of Advanced Vision Encoders for Multimodal Learning (☆378, updated this week)
- Famous Vision Language Models and Their Architectures (☆1,009, updated 6 months ago)
- ICLR 2024 Spotlight: curation/training code, metadata, distribution and pre-trained models for MetaCLIP; CVPR 2024: MoDE: CLIP Data Expert… (☆1,675, updated 2 weeks ago)
- 4M: Massively Multimodal Masked Modeling (☆1,764, updated 3 months ago)
- Efficient Track Anything (☆629, updated 8 months ago)
- LightlyTrain is the first PyTorch framework to pretrain computer vision models on unlabeled data for industrial applications (☆868, updated last week)
- DINO-X: The World's Top-Performing Vision Model for Open-World Object Detection and Understanding (☆1,222, updated last month)
- This repository is a curated collection of the most exciting and influential CVPR 2024 papers. 🔥 [Paper + Code + Demo] (☆738, updated 3 months ago)
- [Fully open] [Encoder-free MLLM] Vision as LoRA (☆338, updated 3 months ago)
- Frontier Multimodal Foundation Models for Image and Video Understanding (☆981, updated last month)
- Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT and more (☆3,124, updated 4 months ago)
- Reference PyTorch implementation and models for DINOv3 (☆7,021, updated this week)
- Official code for "FeatUp: A Model-Agnostic Frameworkfor Features at Any Resolution" ICLR 2024☆1,582Updated last year
- Official implementation of BLIP3o-Series (☆1,477, updated this week)
- Autoregressive Model Beats Diffusion: 🦙 Llama for Scalable Image Generation (☆1,861, updated last year)
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… (☆912, updated last month)
- [ICLR 2025] Repository for Show-o series, One Single Transformer to Unify Multimodal Understanding and Generation (☆1,696, updated last week)
- LLM2CLIP makes SOTA pretrained CLIP models even more SOTA (☆546, updated 2 months ago)
- Eagle: Frontier Vision-Language Models with Data-Centric Strategies (☆865, updated last month)
- This repository is a curated collection of the most exciting and influential CVPR 2025 papers. 🔥 [Paper + Code + Demo] (☆777, updated 3 months ago)
- Compose multimodal datasets 🎹 (☆474, updated last month)
- Grounded SAM 2: Ground and Track Anything in Videos with Grounding DINO, Florence-2 and SAM 2 (☆2,790, updated last week)
- This repo contains the code for 1D tokenizer and generator (☆1,027, updated 5 months ago)