westlake-baichuan-mllm / bc-omni
Baichuan-Omni: Towards Capable Open-source Omni-modal LLM 🚀
⭐268 · Updated 4 months ago
Alternatives and similar repositories for bc-omni
Users interested in bc-omni are comparing it to the repositories listed below.
- ⭐158 · Updated 3 months ago
- [CVPR'25 highlight] RLAIF-V: Open-Source AI Feedback Leads to Super GPT-4V Trustworthiness ⭐365 · Updated 2 weeks ago
- MMR1: Advancing the Frontiers of Multimodal Reasoning ⭐159 · Updated 2 months ago
- [ICLR 2025 Spotlight] OmniCorpus: A Unified Multimodal Corpus of 10 Billion-Level Images Interleaved with Text ⭐356 · Updated 3 weeks ago
- ✨✨Freeze-Omni: A Smart and Low Latency Speech-to-speech Dialogue Model with Frozen LLM ⭐319 · Updated this week
- [CVPR'24] RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback ⭐278 · Updated 8 months ago
- Open-Qwen2VL: Compute-Efficient Pre-Training of Fully-Open Multimodal LLMs on Academic Resources ⭐216 · Updated 2 weeks ago
- The official repo of One RL to See Them All: Visual Triple Unified Reinforcement Learning ⭐230 · Updated this week
- Long Context Transfer from Language to Vision ⭐375 · Updated 2 months ago
- ✨✨R1-Reward: Training Multimodal Reward Model Through Stable Reinforcement Learning ⭐133 · Updated 3 weeks ago
- Rethinking RL Scaling for Vision Language Models: A Transparent, From-Scratch Framework and Comprehensive Evaluation Scheme ⭐127 · Updated last month
- LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture ⭐203 · Updated 4 months ago
- MM-Interleaved: Interleaved Image-Text Generative Modeling via Multi-modal Feature Synchronizer ⭐225 · Updated last year
- ⭐123 · Updated this week
- The official repository for "2.5 Years in Class: A Multimodal Textbook for Vision-Language Pretraining" ⭐156 · Updated 2 months ago
- ⭐230 · Updated 3 months ago
- An easy-to-use, fast, and easily integrable tool for evaluating audio LLMs ⭐102 · Updated last week
- Harnessing 1.4M GPT4V-synthesized Data for A Lite Vision-Language Model ⭐261 · Updated 11 months ago
- ⭐177 · Updated last week
- ⭐188 · Updated last month
- An Easy-to-use, Scalable and High-performance RLHF Framework designed for Multimodal Models. ⭐125 · Updated last month
- Explore the Multimodal "Aha Moment" on 2B Model ⭐589 · Updated 2 months ago
- Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs ⭐166 · Updated last week
- This repo contains the code for "VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks" [ICLR25] ⭐233 · Updated 2 months ago
- ⭐194 · Updated 10 months ago
- 🔥🔥MLVU: Multi-task Long Video Understanding Benchmark ⭐200 · Updated 2 months ago
- A highly capable 2.4B lightweight LLM using only 1T pre-training data with all details. ⭐182 · Updated this week
- 🤗 R1-AQA Model: mispeech/r1-aqa ⭐260 · Updated 2 months ago
- LLaVA-UHD v2: an MLLM Integrating High-Resolution Semantic Pyramid via Hierarchical Window Transformer ⭐376 · Updated last month
- [Survey] Next Token Prediction Towards Multimodal Intelligence: A Comprehensive Survey ⭐443 · Updated 4 months ago