bytedance / Valley
Valley is a cutting-edge multimodal large model designed to handle a variety of tasks involving text, images, and video data.
☆252 · Updated 2 months ago
Alternatives and similar repositories for Valley
Users interested in Valley are comparing it to the repositories listed below.
- ☆186 · Updated 8 months ago
- The official repository of the dots.vlm1 instruct models proposed by rednote-hilab. ☆260 · Updated 3 weeks ago
- [ACL 2025 Oral] 🔥🔥 MegaPairs: Massive Data Synthesis for Universal Multimodal Retrieval ☆228 · Updated 5 months ago
- ☆79 · Updated last year
- Vision Search Assistant: Empower Vision-Language Models as Multimodal Search Engines ☆126 · Updated 11 months ago
- Ming - facilitating advanced multimodal understanding and generation capabilities built upon the Ling LLM. ☆480 · Updated 3 weeks ago
- MuLan: Adapting Multilingual Diffusion Models for 110+ Languages (adds multilingual capability to any diffusion model without additional training) ☆140 · Updated 9 months ago
- GLM Series Edge Models ☆149 · Updated 4 months ago
- ☆241 · Updated 8 months ago
- Image Textualization: An Automatic Framework for Generating Rich and Detailed Image Descriptions (NeurIPS 2024) ☆167 · Updated last year
- The official code for the NeurIPS 2024 paper: Harmonizing Visual Text Comprehension and Generation ☆129 · Updated 11 months ago
- ☆183 · Updated 2 months ago
- 💡 VideoMind: A Chain-of-LoRA Agent for Long Video Reasoning ☆270 · Updated last week
- MM-Interleaved: Interleaved Image-Text Generative Modeling via Multi-modal Feature Synchronizer ☆243 · Updated last year
- [ACL 2025 Findings] Migician: Revealing the Magic of Free-Form Multi-Image Grounding in Multimodal Large Language Models ☆78 · Updated 5 months ago
- Official code for GPT4Video: A Unified Multimodal Large Language Model for Instruction-Followed Understanding and Safety-Aware Generation ☆142 · Updated 11 months ago
- Multimodal Models in Real World ☆548 · Updated 8 months ago
- A simple MLLM that surpasses QwenVL-Max using only open-source data and a 14B LLM. ☆38 · Updated last year
- [ICML 2025] Official PyTorch implementation of LongVU ☆403 · Updated 5 months ago
- Baichuan-Omni: Towards Capable Open-source Omni-modal LLM 🌊 ☆269 · Updated 8 months ago
- Official GPU implementation of the paper "PPLLaVA: Varied Video Sequence Understanding With Prompt Guidance" ☆130 · Updated 11 months ago
- ☆681 · Updated last month
- ☆39 · Updated last year
- Research Code for Multimodal-Cognition Team in Ant Group ☆167 · Updated last week
- Repository for the MM '23 accepted paper "Curriculum-Listener: Consistency- and Complementarity-Aware Audio-Enhanced Temporal Sentence Grounding" ☆51 · Updated last year
- Our 2nd-gen LMM ☆34 · Updated last year