NVlabs / describe-anything
[ICCV 2025] Implementation for Describe Anything: Detailed Localized Image and Video Captioning
☆1,450 · Updated Jun 26, 2025
Alternatives and similar repositories for describe-anything
Users interested in describe-anything are comparing it to the libraries listed below.
- State-of-the-art Image & Video CLIP, Multimodal Large Language Models, and More! · ☆2,159 · Updated this week
- Seed1.5-VL, a vision-language foundation model designed to advance general-purpose multimodal understanding and reasoning, achieving stat… · ☆1,544 · Updated Jun 14, 2025
- Official repository for "AM-RADIO: Reduce All Domains Into One" · ☆1,634 · Updated this week
- Open-source unified multimodal model · ☆5,654 · Updated Oct 27, 2025
- The repository provides code for running inference with the Meta Segment Anything Model 2 (SAM 2), links for downloading the trained mode… · ☆18,477 · Updated Dec 25, 2024
- MAGI-1: Autoregressive Video Generation at Scale · ☆3,641 · Updated Jun 17, 2025
- Solve Visual Understanding with Reinforced VLMs · ☆5,841 · Updated Oct 21, 2025
- Official implementation of BLIP3o-Series · ☆1,635 · Updated Nov 29, 2025
- Official Repo For Pixel-LLM Codebase · ☆1,535 · Updated Jan 23, 2026
- ☆4,562 · Updated Sep 14, 2025
- Qwen3-VL is the multimodal large language model series developed by the Qwen team, Alibaba Cloud. · ☆18,273 · Updated Jan 30, 2026
- [NeurIPS 2025] SpatialLM: Training Large Language Models for Structured Indoor Modeling · ☆4,234 · Updated Sep 26, 2025
- Official repo and evaluation implementation of VSI-Bench · ☆670 · Updated Aug 5, 2025
- Grounded SAM 2: Ground and Track Anything in Videos with Grounding DINO, Florence-2 and SAM 2 · ☆3,265 · Updated Nov 11, 2025
- Stable Virtual Camera: Generative View Synthesis with Diffusion Models · ☆1,559 · Updated Jun 5, 2025
- Scaling Vision Pre-Training to 4K Resolution · ☆221 · Updated Jan 4, 2026
- Reference PyTorch implementation and models for DINOv3 · ☆9,590 · Updated Nov 20, 2025
- VILA is a family of state-of-the-art vision language models (VLMs) for diverse multimodal AI tasks across the edge, data center, and clou… · ☆3,754 · Updated Nov 28, 2025
- [ICCV'25 Best Paper Finalist] ReCamMaster: Camera-Controlled Generative Rendering from A Single Video · ☆1,739 · Updated Nov 28, 2025
- A SOTA open-source image editing model, which aims to provide comparable performance against the closed-source models like GPT-4o and Gem… · ☆2,137 · Updated Dec 29, 2025
- [CVPR 2024 Oral] InternVL Family: A Pioneering Open-Source Alternative to GPT-4o (an open-source multimodal dialogue model approaching GPT-4o performance) · ☆9,806 · Updated Sep 22, 2025
- [ICLR & NeurIPS 2025] Repository for Show-o series, One Single Transformer to Unify Multimodal Understanding and Generation. · ☆1,876 · Updated Jan 8, 2026
- [ECCV 2024] Official implementation of the paper "Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection" · ☆9,725 · Updated Aug 12, 2024
- New repo collection for NVIDIA Cosmos: https://github.com/nvidia-cosmos · ☆8,088 · Updated Jan 6, 2026
- This repository provides the code and model checkpoints for AIMv1 and AIMv2 research projects. · ☆1,397 · Updated Aug 4, 2025
- Next-Token Prediction is All You Need · ☆2,339 · Updated Jan 12, 2026
- Official repository of 'Visual-RFT: Visual Reinforcement Fine-Tuning' & 'Visual-ARFT: Visual Agentic Reinforcement Fine-Tuning' · ☆2,319 · Updated Oct 29, 2025
- Frontier Multimodal Foundation Models for Image and Video Understanding · ☆1,102 · Updated Aug 14, 2025
- PyTorch code and models for the DINOv2 self-supervised learning method (a minimal loading sketch follows this list) · ☆12,393 · Updated Dec 22, 2025
- [CVPR 2025] Code for Segment Any Motion in Videos · ☆459 · Updated Jun 10, 2025
- A novel Multimodal Large Language Model (MLLM) architecture, designed to structurally align visual and textual embeddings. · ☆1,430 · Updated Sep 22, 2025
- [CVPR 2025] Prompt Depth Anything · ☆1,053 · Updated Jan 29, 2026
- Cambrian-1 is a family of multimodal LLMs with a vision-centric design. · ☆1,985 · Updated Nov 7, 2025
- LLM2CLIP significantly improves already state-of-the-art CLIP models. · ☆623 · Updated Feb 1, 2026
- Grounded SAM: Marrying Grounding DINO with Segment Anything & Stable Diffusion & Recognize Anything - Automatically Detect, Segment and … · ☆17,397 · Updated Sep 5, 2024
- PE3R: Perception-Efficient 3D Reconstruction. Take 2-3 photos with your phone, upload them, wait a few minutes, and then start explorin… · ☆395 · Updated Apr 1, 2025
- [CVPR 2025 Best Paper Award] VGGT: Visual Geometry Grounded Transformer · ☆12,448 · Updated Oct 11, 2025
- DINO-X: The World's Top-Performing Vision Model for Open-World Object Detection and Understanding · ☆1,334 · Updated Jul 23, 2025
- ☆1,048 · Updated May 14, 2025
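For orientation on the DINOv2 entry above, here is a minimal sketch of loading a pretrained backbone through torch.hub, following the hub entry points documented in the facebookresearch/dinov2 repository. The random tensor is a stand-in for a real, normalized RGB image, and the printed shape is illustrative only.

```python
# Minimal sketch: load a pretrained DINOv2 ViT-S/14 backbone via torch.hub
# and extract a global image embedding. Spatial dimensions must be multiples
# of the patch size (14); the dummy input below stands in for a real image.
import torch

model = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
model.eval()

x = torch.randn(1, 3, 224, 224)  # dummy batch; 224 = 16 * 14

with torch.no_grad():
    feats = model(x)  # CLS-token embedding, shape (1, 384) for ViT-S/14

print(feats.shape)  # torch.Size([1, 384])
```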