westlake-baichuan-mllm / bc-omni
Baichuan-Omni: Towards Capable Open-source Omni-modal LLM
⭐272 · Updated 11 months ago
Alternatives and similar repositories for bc-omni
Users interested in bc-omni are comparing it to the repositories listed below.
- ⭐183 · Updated 10 months ago
- [CVPR'25 highlight] RLAIF-V: Open-Source AI Feedback Leads to Super GPT-4V Trustworthiness ⭐434 · Updated 7 months ago
- The official repo of One RL to See Them All: Visual Triple Unified Reinforcement Learning ⭐329 · Updated 6 months ago
- [Preprint] On the Generalization of SFT: A Reinforcement Learning Perspective with Reward Rectification ⭐513 · Updated last month
- Official PyTorch implementation of EMOVA in CVPR 2025 (https://arxiv.org/abs/2409.18042) ⭐75 · Updated 9 months ago
- ✨✨ Freeze-Omni: A Smart and Low Latency Speech-to-speech Dialogue Model with Frozen LLM ⭐358 · Updated 7 months ago
- MMR1: Enhancing Multimodal Reasoning with Variance-Aware Sampling and Open Resources ⭐211 · Updated 3 months ago
- ⭐241 · Updated 10 months ago
- ⭐187 · Updated 10 months ago
- Ming: facilitating advanced multimodal understanding and generation capabilities built upon the Ling LLM ⭐558 · Updated last month
- (NIPS 2025) OpenOmni: Official implementation of Advancing Open-Source Omnimodal Large Language Models with Progressive Multimodal Align… ⭐117 · Updated last month
- [COLM 2025] Open-Qwen2VL: Compute-Efficient Pre-Training of Fully-Open Multimodal LLMs on Academic Resources ⭐294 · Updated 4 months ago
- Harnessing 1.4M GPT4V-synthesized Data for A Lite Vision-Language Model ⭐278 · Updated last year
- [ICLR 2025 Spotlight] OmniCorpus: A Unified Multimodal Corpus of 10 Billion-Level Images Interleaved with Text ⭐409 · Updated 7 months ago
- Long Context Transfer from Language to Vision ⭐399 · Updated 9 months ago
- [CVPR'24] RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback ⭐299 · Updated last year
- 🤗 R1-AQA Model: mispeech/r1-aqa ⭐311 · Updated 8 months ago
- The official repository of the dots.vlm1 instruct models proposed by rednote-hilab ⭐276 · Updated 3 months ago
- ⭐208 · Updated 2 months ago
- [ICCV 2025] Explore the Limits of Omni-modal Pretraining at Scale ⭐121 · Updated last year
- MM-Interleaved: Interleaved Image-Text Generative Modeling via Multi-modal Feature Synchronizer ⭐248 · Updated last year
- A toolkit on knowledge distillation for large language models ⭐223 · Updated last week
- LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture ⭐212 · Updated 11 months ago
- ⭐29 · Updated last year
- X-LLM: Bootstrapping Advanced Large Language Models by Treating Multi-Modalities as Foreign Languages ⭐315 · Updated 2 years ago
- Your faithful, impartial partner for audio evaluation: know yourself and your rivals ⭐187 · Updated 3 weeks ago
- [ICCV 2025 Highlight] The official repository for "2.5 Years in Class: A Multimodal Textbook for Vision-Language Pretraining" ⭐182 · Updated 9 months ago
- MMSearch-R1 is an end-to-end RL framework that enables LMMs to perform on-demand, multi-turn search with real-world multimodal search too… ⭐368 · Updated 4 months ago
- A project for tri-modal LLM benchmarking and instruction tuning ⭐53 · Updated 9 months ago
- ⭐75 · Updated last year