westlake-baichuan-mllm / bc-omni
Baichuan-Omni: Towards Capable Open-source Omni-modal LLM
⭐230 · Updated last week
Related projects
Alternatives and complementary repositories for bc-omni
- Harnessing 1.4M GPT4V-synthesized Data for A Lite Vision-Language Model ⭐244 · Updated 4 months ago
- Long Context Transfer from Language to Vision ⭐328 · Updated 2 weeks ago
- RLAIF-V: Aligning MLLMs through Open-Source AI Feedback for Super GPT-4V Trustworthiness ⭐233 · Updated last week
- [CVPR'24] RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback ⭐230 · Updated 2 months ago
- LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture ⭐178 · Updated last month
- MM-Interleaved: Interleaved Image-Text Generative Modeling via Multi-modal Feature Synchronizer ⭐197 · Updated 7 months ago
- [ECCV 2024 Oral] Code for paper: An Image is Worth 1/2 Tokens After Layer 2: Plug-and-Play Inference Acceleration for Large Vision-Langua… ⭐271 · Updated 3 months ago
- LongQLoRA: Extend Context Length of LLMs Efficiently ⭐160 · Updated last year
- OmniCorpus: A Unified Multimodal Corpus of 10 Billion-Level Images Interleaved with Text ⭐270 · Updated 2 weeks ago
- An automated pipeline for evaluating LLMs for role-playing. ⭐135 · Updated last month
- Official Repo for "Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale" ⭐190 · Updated 3 weeks ago
- Exploring Efficient Fine-Grained Perception of Multimodal Large Language Models ⭐51 · Updated last week
- HPT - Open Multimodal LLMs from HyperGAI ⭐312 · Updated 5 months ago
- This repo contains the code and data for "VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks" ⭐62 · Updated last week
- [EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs ⭐216 · Updated 6 months ago
- LLaVA-UHD: an LMM Perceiving Any Aspect Ratio and High-Resolution Images ⭐318 · Updated last month
- SpeechAgents: Human-Communication Simulation with Multi-Modal Multi-Agent Systems ⭐76 · Updated 10 months ago
- [NeurIPS 2024] Needle In A Multimodal Haystack (MM-NIAH): A comprehensive benchmark designed to systematically evaluate the capability of… ⭐98 · Updated 3 weeks ago
- [ACL2024] T-Eval: Evaluating Tool Utilization Capability of Large Language Models Step by Step ⭐230 · Updated 7 months ago
- Efficient Multimodal Large Language Models: A Survey ⭐270 · Updated 2 months ago
- 🔥🔥 First-ever hour scale video understanding models ⭐156 · Updated 2 weeks ago
- Dataset and Code for our ACL 2024 paper: "Multimodal Table Understanding". We propose the first large-scale Multimodal IFT and Pre-Train… ⭐160 · Updated last month
- A minimal codebase for finetuning large multimodal models, supporting llava-1.5/1.6, llava-interleave, llava-next-video, llava-onevision,… ⭐173 · Updated 3 weeks ago
- X-LLM: Bootstrapping Advanced Large Language Models by Treating Multi-Modalities as Foreign Languages ⭐305 · Updated last year