[ICCV 2025] Implementation for Describe Anything: Detailed Localized Image and Video Captioning
☆1,456 · Updated Jun 26, 2025
Alternatives and similar repositories for describe-anything
Users interested in describe-anything are comparing it to the repositories listed below.
- State-of-the-art Image & Video CLIP, Multimodal Large Language Models, and More! · ☆2,181 · Updated Feb 11, 2026
- Seed1.5-VL, a vision-language foundation model designed to advance general-purpose multimodal understanding and reasoning, achieving stat… · ☆1,551 · Updated Jun 14, 2025
- Official repository for "AM-RADIO: Reduce All Domains Into One" · ☆1,682 · Updated Feb 11, 2026
- Open-source unified multimodal model · ☆5,723 · Updated Oct 27, 2025
- The repository provides code for running inference with the Meta Segment Anything Model 2 (SAM 2), links for downloading the trained mode… · ☆18,610 · Updated this week
- MAGI-1: Autoregressive Video Generation at Scale · ☆3,647 · Updated Jun 17, 2025
- Solve Visual Understanding with Reinforced VLMs · ☆5,855 · Updated Oct 21, 2025
- Official implementation of BLIP3o-Series · ☆1,642 · Updated Nov 29, 2025
- Official repository for the Pixel-LLM codebase · ☆1,550 · Updated Feb 27, 2026
- (no description) · ☆4,582 · Updated Sep 14, 2025
- Qwen3-VL is the multimodal large language model series developed by the Qwen team at Alibaba Cloud.