facebookresearch / perception_models
State-of-the-art Image & Video CLIP, Multimodal Large Language Models, and More!
☆2,087 · Updated last month
Alternatives and similar repositories for perception_models
Users who are interested in perception_models are comparing it to the libraries listed below. Two minimal usage sketches (CLIP-style zero-shot classification and text-prompted detection) follow the list.
- Code for the Molmo Vision-Language Model ☆858 · Updated last year
- Official repository for "AM-RADIO: Reduce All Domains Into One" ☆1,427 · Updated 2 weeks ago
- [ICCV 2025] Implementation for Describe Anything: Detailed Localized Image and Video Captioning ☆1,443 · Updated 6 months ago
- PyTorch code and models for V-JEPA 2 self-supervised learning from video. ☆2,813 · Updated 4 months ago
- A suite of image and video neural tokenizers ☆1,699 · Updated 11 months ago
- This repository provides the code and model checkpoints for the AIMv1 and AIMv2 research projects. ☆1,394 · Updated 5 months ago
- NeurIPS 2025 Spotlight; ICLR 2024 Spotlight; CVPR 2024; EMNLP 2024 ☆1,801 · Updated last month
- DINO-X: The World's Top-Performing Vision Model for Open-World Object Detection and Understanding ☆1,323 · Updated 5 months ago
- Grounding DINO 1.5: IDEA Research's Most Capable Open-World Object Detection Model Series ☆1,075 · Updated 11 months ago
- 4M: Massively Multimodal Masked Modeling ☆1,783 · Updated 7 months ago
- [ICCV 2025] OpenVision: A Fully-Open, Cost-Effective Family of Advanced Vision Encoders for Multimodal Learning ☆413 · Updated last month
- Frontier Multimodal Foundation Models for Image and Video Understanding ☆1,093 · Updated 5 months ago
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… ☆939 · Updated 5 months ago
- Famous Vision Language Models and Their Architectures ☆1,149 · Updated last week
- Cosmos-Reason1 models understand physical common sense and generate appropriate embodied decisions in natural language through long c… ☆883 · Updated 2 weeks ago
- Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT, and more ☆3,324 · Updated 8 months ago
- Grounded SAM 2: Ground and Track Anything in Videos with Grounding DINO, Florence-2, and SAM 2 ☆3,217 · Updated 2 months ago
- [ICLR & NeurIPS 2025] Repository for the Show-o series: One Single Transformer to Unify Multimodal Understanding and Generation. ☆1,853 · Updated last week
- Hiera: A fast, powerful, and simple hierarchical vision transformer. ☆1,049 · Updated last year
- Efficient Track Anything ☆765 · Updated last year
- An open-source implementation for fine-tuning the Qwen-VL series by Alibaba Cloud ☆1,582 · Updated last week
- [CVPR 2025] Official PyTorch implementation of "EdgeTAM: On-Device Track Anything Model" ☆854 · Updated last month
- [Fully open] [Encoder-free MLLM] Vision as LoRA ☆372 · Updated 7 months ago
- Official repo for the Pixel-LLM codebase ☆1,492 · Updated last week
- LLM2CLIP uses large language models to make already-SOTA pretrained CLIP models even stronger ☆568 · Updated last month
- Eagle: Frontier Vision-Language Models with Data-Centric Strategies ☆921 · Updated 2 months ago
- Autoregressive Model Beats Diffusion: 🦙 Llama for Scalable Image Generation ☆1,921 · Updated last year
- A flexible and efficient codebase for training visually-conditioned language models (VLMs) ☆903 · Updated last year
- VisionLLM Series ☆1,133 · Updated 10 months ago
- Official code for "FeatUp: A Model-Agnostic Framework for Features at Any Resolution" (ICLR 2024) ☆1,622 · Updated last year
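
Many of the encoders above (SigLIP from the big_vision codebase, AIMv2, OpenVision, LLM2CLIP, and perception_models itself) expose a CLIP-style paired image-text embedding interface. Here is a minimal zero-shot classification sketch, assuming a standard CLIP checkpoint served through the Hugging Face `transformers` API; the checkpoint name and image path are illustrative:

```python
# Zero-shot image classification with a CLIP-style model.
# Assumes `torch`, `transformers`, and `Pillow` are installed.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

ckpt = "openai/clip-vit-base-patch32"  # illustrative checkpoint
model = CLIPModel.from_pretrained(ckpt)
processor = CLIPProcessor.from_pretrained(ckpt)

image = Image.open("example.jpg")  # hypothetical local image
labels = ["a photo of a cat", "a photo of a dog", "a photo of a car"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds image-text similarity scores, shape (1, num_labels).
probs = outputs.logits_per_image.softmax(dim=-1)
for label, p in zip(labels, probs[0].tolist()):
    print(f"{label}: {p:.3f}")
```

The repo-specific loaders differ in naming, but the pattern (encode both modalities, then compare in a shared embedding space) carries over.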
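
For the open-world detectors in the list (Grounding DINO 1.5, DINO-X, Grounded SAM 2), text-prompted detection follows a similar load/preprocess/post-process flow. A sketch using the Grounding DINO port in `transformers`; the post-processing signature and result keys have shifted across library versions, so treat this as a sketch rather than a pinned recipe:

```python
# Open-vocabulary object detection with Grounding DINO via `transformers`.
import torch
from PIL import Image
from transformers import AutoProcessor, GroundingDinoForObjectDetection

ckpt = "IDEA-Research/grounding-dino-tiny"  # illustrative checkpoint
processor = AutoProcessor.from_pretrained(ckpt)
model = GroundingDinoForObjectDetection.from_pretrained(ckpt)

image = Image.open("example.jpg")  # hypothetical local image
# Grounding DINO expects lowercase phrases, each terminated by a period.
text = "a cat. a dog."

inputs = processor(images=image, text=text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert raw logits/boxes into thresholded detections in pixel coordinates.
results = processor.post_process_grounded_object_detection(
    outputs,
    inputs.input_ids,
    box_threshold=0.35,
    text_threshold=0.25,
    target_sizes=[image.size[::-1]],  # (height, width)
)[0]
for box, score in zip(results["boxes"], results["scores"]):
    print(score.item(), box.tolist())
```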