westlake-baichuan-mllm / bc-omni
Baichuan-Omni: Towards Capable Open-source Omni-modal LLM
⭐267 · Updated 6 months ago
Alternatives and similar repositories for bc-omni
Users interested in bc-omni are comparing it to the libraries listed below
- ⭐164 · Updated 5 months ago
- [CVPR'25 highlight] RLAIF-V: Open-Source AI Feedback Leads to Super GPT-4V Trustworthiness ⭐392 · Updated 2 months ago
- The official repo of One RL to See Them All: Visual Triple Unified Reinforcement Learning ⭐303 · Updated 2 months ago
- MMR1: Advancing the Frontiers of Multimodal Reasoning ⭐162 · Updated 4 months ago
- [COLM 2025] Open-Qwen2VL: Compute-Efficient Pre-Training of Fully-Open Multimodal LLMs on Academic Resources ⭐242 · Updated 2 months ago
- ✨✨Freeze-Omni: A Smart and Low Latency Speech-to-speech Dialogue Model with Frozen LLM ⭐333 · Updated 2 months ago
- Official PyTorch implementation of EMOVA in CVPR 2025 (https://arxiv.org/abs/2409.18042) ⭐59 · Updated 4 months ago
- ⭐173 · Updated 5 months ago
- OpenOmni: Official implementation of Advancing Open-Source Omnimodal Large Language Models with Progressive Multimodal Alignment and Rea… ⭐91 · Updated last month
- ⭐235 · Updated 5 months ago
- [ICLR 2025 Spotlight] OmniCorpus: A Unified Multimodal Corpus of 10 Billion-Level Images Interleaved with Text ⭐382 · Updated 2 months ago
- X-LLM: Bootstrapping Advanced Large Language Models by Treating Multi-Modalities as Foreign Languages ⭐312 · Updated last year
- ⭐196 · Updated 3 months ago
- Harnessing 1.4M GPT4V-synthesized Data for A Lite Vision-Language Model ⭐267 · Updated last year
- Valley is a cutting-edge multimodal large model designed to handle a variety of tasks involving text, images, and video data. ⭐245 · Updated 5 months ago
- Rethinking RL Scaling for Vision Language Models: A Transparent, From-Scratch Framework and Comprehensive Evaluation Scheme ⭐138 · Updated 3 months ago
- 🤗 R1-AQA Model: mispeech/r1-aqa ⭐281 · Updated 4 months ago
- LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture ⭐207 · Updated 6 months ago
- MM-Interleaved: Interleaved Image-Text Generative Modeling via Multi-modal Feature Synchronizer ⭐230 · Updated last year
- [ICCV'25] Explore the Limits of Omni-modal Pretraining at Scale ⭐111 · Updated 11 months ago
- MMSearch-R1 is an end-to-end RL framework that enables LMMs to perform on-demand, multi-turn search with real-world multimodal search too… ⭐268 · Updated last month
- ⭐29 · Updated 11 months ago
- ACL 2025 Findings paper: From Specific-MLLMs to Omni-MLLMs: A Survey on MLLMs Aligned with Multi-modalities ⭐44 · Updated last week
- Long Context Transfer from Language to Vision ⭐385 · Updated 4 months ago
- An automated pipeline for evaluating LLMs for role-playing. ⭐192 · Updated 10 months ago
- Scaling Preference Data Curation via Human-AI Synergy ⭐94 · Updated 3 weeks ago
- [CVPR'24] RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback ⭐287 · Updated 10 months ago
- MiMo-VL ⭐469 · Updated last week
- [ICCV 2025 Highlight] The official repository for "2.5 Years in Class: A Multimodal Textbook for Vision-Language Pretraining" ⭐167 · Updated 4 months ago
- HPT - Open Multimodal LLMs from HyperGAI ⭐315 · Updated last year