Kwai-Keye / Keye
☆668 · Updated last week
Alternatives and similar repositories for Keye
Users interested in Keye are comparing it to the libraries listed below.
- ☆844 · Updated last month
- R1-onevision, a visual language model capable of deep CoT reasoning. ☆567 · Updated 5 months ago
- MiMo-VL ☆563 · Updated last month
- 🔥🔥 First-ever hour-scale video understanding models ☆553 · Updated 2 months ago
- Official implementation of UnifiedReward & [NeurIPS 2025] UnifiedReward-Think ☆555 · Updated last week
- This is the first paper to explore how to effectively use R1-like RL for MLLMs and introduce Vision-R1, a reasoning MLLM that leverages … ☆701 · Updated 3 weeks ago
- Explore the Multimodal “Aha Moment” on a 2B Model ☆609 · Updated 6 months ago
- [COLM 2025] Open-Qwen2VL: Compute-Efficient Pre-Training of Fully-Open Multimodal LLMs on Academic Resources ☆271 · Updated last month
- MM-EUREKA: Exploring the Frontiers of Multimodal Reasoning with Rule-based Reinforcement Learning ☆736 · Updated 3 weeks ago
- Video-R1: Reinforcing Video Reasoning in MLLMs [🔥 the first paper to explore R1 for video] ☆707 · Updated 2 weeks ago
- Ming - facilitating advanced multimodal understanding and generation capabilities built upon the Ling LLM. ☆467 · Updated last week
- Seed1.5-VL, a vision-language foundation model designed to advance general-purpose multimodal understanding and reasoning, achieving stat… ☆1,445 · Updated 3 months ago
- ✨ First open-source R1-like Video-LLM [2025/02/18] ☆367 · Updated 7 months ago
- Awesome Unified Multimodal Models ☆758 · Updated last month
- [Survey] Next Token Prediction Towards Multimodal Intelligence: A Comprehensive Survey ☆449 · Updated 8 months ago
- MM-Eureka V0, also called R1-Multimodal-Journey; the latest version is in MM-Eureka ☆319 · Updated 3 months ago
- Tarsier, a family of large-scale video-language models designed to generate high-quality video descriptions, together with g… ☆483 · Updated last month
- ☆275 · Updated 2 months ago
- The official repository of the dots.vlm1 instruct models proposed by rednote-hilab. ☆255 · Updated last week
- [ICLR 2025 Spotlight] OmniCorpus: A Unified Multimodal Corpus of 10 Billion-Level Images Interleaved with Text ☆395 · Updated 4 months ago
- NeurIPS 2024 paper: A Unified Pixel-level Vision LLM for Understanding, Generating, Segmenting, and Editing ☆569 · Updated 11 months ago
- This is the official implementation of ICCV 2025 "Flash-VStream: Efficient Real-Time Understanding for Long Video Streams" ☆236 · Updated 2 months ago
- Extends OpenRLHF to support LMM RL training for reproduction of DeepSeek-R1 on multimodal tasks. ☆822 · Updated 4 months ago
- Kimi-VL: Mixture-of-Experts Vision-Language Model for Multimodal Reasoning, Long-Context Understanding, and Strong Agent Capabilities ☆1,063 · Updated 2 months ago
- [ACL 2025 Oral] 🔥🔥 MegaPairs: Massive Data Synthesis for Universal Multimodal Retrieval ☆225 · Updated 4 months ago
- Valley is a cutting-edge multimodal large model designed to handle a variety of tasks involving text, image, and video data. ☆251 · Updated last month
- Multimodal Models in the Real World ☆543 · Updated 7 months ago
- Official code for "Mini-o3: Scaling Up Reasoning Patterns and Interaction Turns for Visual Search" ☆322 · Updated 2 weeks ago
- MM-Interleaved: Interleaved Image-Text Generative Modeling via Multi-modal Feature Synchronizer ☆241 · Updated last year
- [CVPR'25 Highlight] RLAIF-V: Open-Source AI Feedback Leads to Super GPT-4V Trustworthiness ☆414 · Updated 4 months ago