facebookresearch / perception_models
State-of-the-art Image & Video CLIP, Multimodal Large Language Models, and More!
☆2,017 · Updated last week
Alternatives and similar repositories for perception_models
Users interested in perception_models are comparing it to the libraries listed below; a generic CLIP usage sketch follows the list.
- Code for the Molmo Vision-Language Model ☆845 · Updated last year
- Official repository for "AM-RADIO: Reduce All Domains Into One" ☆1,422 · Updated last week
- [ICCV 2025] Implementation for Describe Anything: Detailed Localized Image and Video Captioning ☆1,436 · Updated 6 months ago
- PyTorch code and models for VJEPA2 self-supervised learning from video ☆2,618 · Updated 4 months ago
- A suite of image and video neural tokenizers ☆1,694 · Updated 10 months ago
- This repository provides the code and model checkpoints for the AIMv1 and AIMv2 research projects ☆1,394 · Updated 4 months ago
- NeurIPS 2025 Spotlight; ICLR 2024 Spotlight; CVPR 2024; EMNLP 2024 ☆1,786 · Updated last month
- Grounding DINO 1.5: IDEA Research's Most Capable Open-World Object Detection Model Series ☆1,072 · Updated 11 months ago
- [ICCV 2025] OpenVision: A Fully-Open, Cost-Effective Family of Advanced Vision Encoders for Multimodal Learning ☆412 · Updated 3 weeks ago
- DINO-X: The World's Top-Performing Vision Model for Open-World Object Detection and Understanding ☆1,313 · Updated 5 months ago
- Frontier Multimodal Foundation Models for Image and Video Understanding ☆1,084 · Updated 4 months ago
- Famous Vision Language Models and Their Architectures ☆1,128 · Updated 10 months ago
- 4M: Massively Multimodal Masked Modeling ☆1,781 · Updated 6 months ago
- This repository contains the official implementations of the research papers "MobileCLIP" (CVPR 2024) and "MobileCLIP2" (TMLR, August 2025) ☆1,350 · Updated 2 months ago
- Efficient Track Anything ☆757 · Updated 11 months ago
- LLM2CLIP makes SOTA pretrained CLIP models even more SOTA ☆567 · Updated 3 weeks ago
- Autoregressive Model Beats Diffusion: 🦙 Llama for Scalable Image Generation ☆1,916 · Updated last year
- Hiera: A fast, powerful, and simple hierarchical vision transformer ☆1,045 · Updated last year
- [CVPR 2025] Official PyTorch implementation of "EdgeTAM: On-Device Track Anything Model" ☆845 · Updated 3 weeks ago
- Kimi-VL: Mixture-of-Experts Vision-Language Model for Multimodal Reasoning, Long-Context Understanding, and Strong Agent Capabilities ☆1,132 · Updated 5 months ago
- [ICLR & NeurIPS 2025] Repository for the Show-o series: One Single Transformer to Unify Multimodal Understanding and Generation ☆1,835 · Updated this week
- [Fully open] [Encoder-free MLLM] Vision as LoRA ☆359 · Updated 6 months ago
- [ICCV 2025] LLaVA-CoT, a visual language model capable of spontaneous, systematic reasoning ☆2,109 · Updated 2 weeks ago
- This repository is a curated collection of the most exciting and influential CVPR 2025 papers. 🔥 [Paper + Code + Demo] ☆828 · Updated 6 months ago
- Official implementation of the BLIP3o series ☆1,612 · Updated last month
- Official code for "FeatUp: A Model-Agnostic Framework for Features at Any Resolution" (ICLR 2024) ☆1,620 · Updated last year
- PyTorch implementation of Transfusion, "Predict the Next Token and Diffuse Images with One Multi-Modal Model", from Meta AI ☆1,288 · Updated 3 weeks ago
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… ☆936 · Updated 4 months ago
- Cosmos-Reason1 models understand physical common sense and generate appropriate embodied decisions in natural language through long c… ☆866 · Updated last week
- Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT and more ☆3,297 · Updated 7 months ago
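Several of the encoders listed above (perception_models, MobileCLIP, OpenVision, LLM2CLIP) are CLIP-style models, and all of them share the same zero-shot image-text matching pattern. Here is a minimal sketch of that pattern using the Hugging Face transformers CLIP bindings; this is a generic illustration, not the API of any repository listed here, and the local image path is hypothetical.

```python
# Minimal zero-shot image-text matching sketch (CLIP-style).
# Uses the Hugging Face CLIP bindings as a stand-in for the
# CLIP-style encoders listed above; "cat.jpg" is a hypothetical file.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("cat.jpg")  # hypothetical local image
labels = ["a photo of a cat", "a photo of a dog"]

# Tokenize the prompts and preprocess the image in one call.
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds the image-to-text similarity scores;
# softmax turns them into label probabilities.
probs = outputs.logits_per_image.softmax(dim=-1)
for label, p in zip(labels, probs[0].tolist()):
    print(f"{label}: {p:.3f}")
```

Swapping in a different checkpoint (or a different CLIP-style repo's weights, where a conversion exists) generally leaves this loop unchanged: encode the image, encode the candidate texts, and rank by similarity.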