ByteDance-Seed / Seed1.5-VL
Seed1.5-VL is a vision-language foundation model designed to advance general-purpose multimodal understanding and reasoning; it achieves state-of-the-art performance on 38 of 60 public benchmarks.
☆1,539 · Updated 7 months ago
Alternatives and similar repositories for Seed1.5-VL
Users interested in Seed1.5-VL are comparing it to the repositories listed below.
- Kimi-VL: Mixture-of-Experts Vision-Language Model for Multimodal Reasoning, Long-Context Understanding, and Strong Agent Capabilities ☆1,156 · Updated 6 months ago
- A novel Multimodal Large Language Model (MLLM) architecture, designed to structurally align visual and textual embeddings. ☆1,430 · Updated 4 months ago
- GLM-4.6V/4.5V/4.1V-Thinking: Towards Versatile Multimodal Reasoning with Scalable Reinforcement Learning ☆2,162 · Updated 2 weeks ago
- Official implementation of BLIP3o-Series ☆1,635 · Updated 2 months ago
- MiMo-VL ☆623 · Updated 5 months ago
- ☆716 · Updated last week
- ☆1,122 · Updated 2 months ago
- MMaDA - Open-Sourced Multimodal Large Diffusion Language Models ☆1,574 · Updated 2 months ago
- R1-onevision, a visual language model capable of deep CoT reasoning. ☆575 · Updated 10 months ago
- Next-Token Prediction is All You Need ☆2,339 · Updated last month
- Tarsier -- a family of large-scale video-language models designed to generate high-quality video descriptions, together with g… ☆516 · Updated 5 months ago
- A fork to add multimodal model training to open-r1 ☆1,449 · Updated last year
- [ICCV 2025] Implementation for Describe Anything: Detailed Localized Image and Video Captioning ☆1,448 · Updated 7 months ago
- Fully Open Framework for Democratized Multimodal Training ☆718 · Updated last month
- 🔥🔥First-ever hour-scale video understanding models ☆611 · Updated 6 months ago
- Explore the Multimodal “Aha Moment” on a 2B Model ☆623 · Updated 10 months ago
- Frontier Multimodal Foundation Models for Image and Video Understanding ☆1,102 · Updated 5 months ago
- MM-EUREKA: Exploring the Frontiers of Multimodal Reasoning with Rule-based Reinforcement Learning ☆768 · Updated 5 months ago
- Video-R1: Reinforcing Video Reasoning in MLLMs [🔥the first paper to explore R1 for video] ☆816 · Updated last month
- ☆999 · Updated 10 months ago
- [ICLR & NeurIPS 2025] Repository for the Show-o series: One Single Transformer to Unify Multimodal Understanding and Generation ☆1,876 · Updated last month
- [ICLR 2026] This is the first paper to explore how to effectively use R1-like RL for MLLMs and introduce Vision-R1, a reasoning MLLM that… ☆760 · Updated 2 weeks ago
- Extend OpenRLHF to support LMM RL training for reproduction of DeepSeek-R1 on multimodal tasks. ☆840 · Updated 8 months ago
- Native Multimodal Models are World Learners ☆1,456 · Updated last month
- [CVPR 2025] VideoWorld is a simple generative model that learns purely from unlabeled videos—much like how babies learn by observing thei… ☆666 · Updated 6 months ago
- ✨✨[CVPR 2025] Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis ☆730 · Updated 2 months ago
- Awesome Unified Multimodal Models ☆1,098 · Updated last week
- Open-source unified multimodal model ☆5,654 · Updated 3 months ago
- ☆985 · Updated last week
- Anole: An Open, Autoregressive, and Native Multimodal Model for Interleaved Image-Text Generation ☆823 · Updated 7 months ago