facebookresearch / perception_models
State-of-the-art Image & Video CLIP, Multimodal Large Language Models, and More!
☆1,431 · Updated this week
Alternatives and similar repositories for perception_models
Users interested in perception_models are comparing it to the repositories listed below (a minimal usage sketch follows the list):
- Official repository for "AM-RADIO: Reduce All Domains Into One" ☆1,241 · Updated 2 weeks ago
- Code for the Molmo Vision-Language Model ☆557 · Updated 7 months ago
- [ICCV 2025] Implementation for Describe Anything: Detailed Localized Image and Video Captioning ☆1,262 · Updated 3 weeks ago
- PyTorch code and models for VJEPA2 self-supervised learning from video ☆1,881 · Updated 2 weeks ago
- A suite of image and video neural tokenizers ☆1,645 · Updated 5 months ago
- This repository provides the code and model checkpoints for the AIMv1 and AIMv2 research projects ☆1,326 · Updated 2 months ago
- 4M: Massively Multimodal Masked Modeling ☆1,748 · Updated last month
- Grounding DINO 1.5: IDEA Research's Most Capable Open-World Object Detection Model Series ☆983 · Updated 5 months ago
- DINO-X: The World's Top-Performing Vision Model for Open-World Object Detection and Understanding ☆1,117 · Updated 3 weeks ago
- Frontier Multimodal Foundation Models for Image and Video Understanding ☆899 · Updated last month
- [ICCV 2025] OpenVision: A Fully-Open, Cost-Effective Family of Advanced Vision Encoders for Multimodal Learning ☆284 · Updated 2 months ago
- [Fully open] [Encoder-free MLLM] Vision as LoRA ☆316 · Updated last month
- Efficient Track Anything ☆586 · Updated 6 months ago
- Famous Vision Language Models and Their Architectures ☆927 · Updated 4 months ago
- ICLR 2024 Spotlight: curation/training code, metadata, distribution, and pre-trained models for MetaCLIP; CVPR 2024: MoDE: CLIP Data Experts via Clustering ☆1,474 · Updated last week
- LLM2CLIP makes SOTA pretrained CLIP models even more SOTA ☆531 · Updated 2 weeks ago
- This repository is a curated collection of the most exciting and influential CVPR 2024 papers. 🔥 [Paper + Code + Demo] ☆734 · Updated last month
- Hiera: A fast, powerful, and simple hierarchical vision transformer ☆1,003 · Updated last year
- Official code for "FeatUp: A Model-Agnostic Framework for Features at Any Resolution", ICLR 2024 ☆1,543 · Updated last year
- Autoregressive Model Beats Diffusion: 🦙 Llama for Scalable Image Generation ☆1,805 · Updated 11 months ago
- This repository contains the official implementation of the research paper "MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training" ☆988 · Updated 7 months ago
- Grounded SAM 2: Ground and Track Anything in Videos with Grounding DINO, Florence-2 and SAM 2 ☆2,431 · Updated last month
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses that are seamlessly integrated with object segmentation masks ☆894 · Updated last month
- [ICLR 2025] Repository for the Show-o series: One Single Transformer to Unify Multimodal Understanding and Generation ☆1,587 · Updated this week
- Compose multimodal datasets 🎹 ☆438 · Updated last month
- This repository is a curated collection of the most exciting and influential CVPR 2025 papers. 🔥 [Paper + Code + Demo] ☆705 · Updated last month
- Cosmos-Reason1 models understand physical common sense and generate appropriate embodied decisions in natural language through long chain-of-thought reasoning processes ☆549 · Updated last week
- This repo contains the code for the paper "Intuitive physics understanding emerges from self-supervised pretraining on natural videos" ☆169 · Updated 5 months ago
- [NeurIPS 2024] Code release for "Segment Anything without Supervision" ☆476 · Updated 3 weeks ago
- Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT and more ☆3,004 · Updated last month
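Many of the repositories above (perception_models, MetaCLIP, MobileCLIP, LLM2CLIP, OpenVision) center on CLIP-style image-text encoders. For orientation, here is a minimal zero-shot classification sketch using the `open_clip_torch` library; it is illustrative and not taken from any repository in this list. The `ViT-B-32` / `laion2b_s34b_b79k` checkpoint is a widely available stand-in (swap the `pretrained` tag, e.g. for a MetaCLIP release, after checking `open_clip.list_pretrained()`), and `example.jpg` is a hypothetical input file.

```python
# Illustrative zero-shot classification sketch, assuming open_clip_torch is
# installed. The model name, pretrained tag, and example.jpg are stand-ins.
import torch
from PIL import Image
import open_clip

model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k"
)
tokenizer = open_clip.get_tokenizer("ViT-B-32")
model.eval()

image = preprocess(Image.open("example.jpg")).unsqueeze(0)
text = tokenizer(["a photo of a cat", "a photo of a dog"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # L2-normalize, then softmax over the scaled image-text similarities
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(probs)  # one probability per candidate caption
```

Comparing encoders across these projects usually comes down to changing the model name and pretrained tag; the preprocessing transform returned alongside each model keeps inputs consistent with how that checkpoint was trained.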