bytedance / Valley
Valley is a cutting-edge multimodal large model designed to handle a variety of tasks involving text, images, and video data.
☆269 · Updated 2 weeks ago
Alternatives and similar repositories for Valley
Users interested in Valley are comparing it to the libraries listed below.
- ☆187 · Updated 11 months ago
- The official repository of the dots.vlm1 instruct models proposed by rednote-hilab. ☆283 · Updated 4 months ago
- [ACL 2025 Oral] 🔥🔥 MegaPairs: Massive Data Synthesis for Universal Multimodal Retrieval ☆241 · Updated 2 months ago
- Vision Search Assistant: Empower Vision-Language Models as Multimodal Search Engines ☆129 · Updated last year
- ☆79 · Updated last year
- Image Textualization: An Automatic Framework for Generating Rich and Detailed Image Descriptions (NeurIPS 2024) ☆172 · Updated last year
- [ACL 2025 Findings] Migician: Revealing the Magic of Free-Form Multi-Image Grounding in Multimodal Large Language Models ☆90 · Updated 8 months ago
- MuLan: Adapting Multilingual Diffusion Models for 110+ Languages (adds multilingual support to any diffusion model without additional training) ☆146 · Updated last year
- Research code from the Multimodal-Cognition team at Ant Group ☆172 · Updated 3 months ago
- GLM Series Edge Models ☆157 · Updated 7 months ago
- 🧠 VideoMind: A Chain-of-LoRA Agent for Temporal-Grounded Video Reasoning (ICLR 2026) ☆301 · Updated last week
- ☆147 · Updated 6 months ago
- Ming: facilitating advanced multimodal understanding and generation capabilities built upon the Ling LLM ☆575 · Updated 3 months ago
- ☆242 · Updated 11 months ago
- ☆714 · Updated 2 months ago
- MM-Interleaved: Interleaved Image-Text Generative Modeling via Multi-modal Feature Synchronizer ☆248 · Updated last year
- Official GPU implementation of the paper "PPLLaVA: Varied Video Sequence Understanding With Prompt Guidance" ☆131 · Updated last year
- Official code for GPT4Video: A Unified Multimodal Large Language Model for Instruction-Followed Understanding and Safety-Aware Generation ☆144 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆46 · Updated 4 months ago
- A simple MLLM that surpassed QwenVL-Max using only open-source data and a 14B LLM ☆38 · Updated last year
- ☆194 · Updated last month
- mllm-npu: training multimodal large language models on Ascend NPUs ☆95 · Updated last year
- The official code for the NeurIPS 2024 paper: Harmonizing Visual Text Comprehension and Generation ☆129 · Updated last year
- Multimodal Models in the Real World ☆554 · Updated 11 months ago
- ☆41 · Updated last year
- SkyScript-100M: 1,000,000,000 Pairs of Scripts and Shooting Scripts for Short Drama (https://arxiv.org/abs/2408.09333v2) ☆134 · Updated last year
- 🔥🔥 First-ever hour-scale video understanding models ☆607 · Updated 6 months ago
- [ACL 2024] GroundingGPT: Language-Enhanced Multi-modal Grounding Model ☆342 · Updated last year
- Official code for "Fox: Focus Anywhere for Fine-grained Multi-page Document Understanding" ☆195 · Updated last year
- Exploring Efficient Fine-Grained Perception of Multimodal Large Language Models ☆65 · Updated last year