baidubce / Qianfan-VL
Qianfan-VL: Domain-Enhanced Universal Vision-Language Models
☆180 · Updated 4 months ago
Alternatives and similar repositories for Qianfan-VL
Users interested in Qianfan-VL are comparing it to the libraries listed below.
- [ACL 2025 Oral] 🔥🔥 MegaPairs: Massive Data Synthesis for Universal Multimodal Retrieval ☆241 · Updated 2 months ago
- ☆187 · Updated 11 months ago
- Repo for "VRAG-RL: Empower Vision-Perception-Based RAG for Visually Rich Information Understanding via Iterative Reasoning with Reinforce… ☆436 · Updated 3 weeks ago
- The official repository of the dots.vlm1 instruct models proposed by rednote-hilab. ☆283 · Updated 4 months ago
- Valley is a cutting-edge multimodal large model designed to handle a variety of tasks involving text, images, and video data. ☆269 · Updated 2 weeks ago
- Ling is a MoE LLM provided and open-sourced by InclusionAI. ☆238 · Updated 8 months ago
- Exploring Efficient Fine-Grained Perception of Multimodal Large Language Models ☆65 · Updated last year
- ☆114 · Updated 3 weeks ago
- A toolkit on knowledge distillation for large language models ☆261 · Updated last month
- Research Code for Multimodal-Cognition Team in Ant Group ☆172 · Updated 3 months ago
- The official repo of One RL to See Them All: Visual Triple Unified Reinforcement Learning ☆330 · Updated 8 months ago
- MMSearch-R1 is an end-to-end RL framework that enables LMMs to perform on-demand, multi-turn search with real-world multimodal search too… ☆387 · Updated 5 months ago
- ZO2 (Zeroth-Order Offloading): Full Parameter Fine-Tuning 175B LLMs with 18GB GPU Memory [COLM2025] ☆199 · Updated 6 months ago
- Train a LLaVA model with better Chinese support, and open-source the training code and data. ☆79 · Updated last year
- [COLM 2025] Open-Qwen2VL: Compute-Efficient Pre-Training of Fully-Open Multimodal LLMs on Academic Resources ☆305 · Updated 5 months ago
- MMR1: Enhancing Multimodal Reasoning with Variance-Aware Sampling and Open Resources ☆214 · Updated 4 months ago
- [ACM MM 2025] The official code of "Breaking the Modality Barrier: Universal Embedding Learning with Multimodal LLMs" ☆103 · Updated last month
- PDF Parsing Tool: GOT's vLLM acceleration implementation, MinerU for layout recognition, and GOT for table and formula parsing. ☆65 · Updated last year
- Repo for Benchmarking Multimodal Retrieval Augmented Generation with Dynamic VQA Dataset and Self-adaptive Planning Agent ☆412 · Updated 9 months ago
- A Survey of Multimodal Retrieval-Augmented Generation ☆20 · Updated 3 months ago
- Baichuan-Omni: Towards Capable Open-source Omni-modal LLM 🌊 ☆272 · Updated last year
- Official code for "Fox: Focus Anywhere for Fine-grained Multi-page Document Understanding" ☆195 · Updated last year
- ☆41 · Updated last year
- vLLM documentation in Simplified Chinese / vLLM 中文文档 ☆154 · Updated last month
- 2025.01: A multimodal large model implemented from scratch and named Reyes (睿视; R: 睿, eyes: 眼). Reyes has 8B parameters, uses InternViT-300M-448px-V2_5 as its vision encoder and Qwen2.5-7B-Instruct on the language-model side; Reyes also, via a two-… ☆29 · Updated this week
- a-m-team's exploration in large language modeling ☆195 · Updated 8 months ago
- OpenJudge: A Unified Framework for Holistic Evaluation and Quality Rewards ☆355 · Updated this week
- ☆520 · Updated last month
- Trinity-RFT is a general-purpose, flexible and scalable framework designed for reinforcement fine-tuning (RFT) of large language models (… ☆510 · Updated this week
- Dataset and Code for our ACL 2024 paper: "Multimodal Table Understanding". We propose the first large-scale Multimodal IFT and Pre-Train … ☆223 · Updated 7 months ago