phellonchen / X-LLM
X-LLM: Bootstrapping Advanced Large Language Models by Treating Multi-Modalities as Foreign Languages
☆312 · Updated last year
Alternatives and similar repositories for X-LLM
Users interested in X-LLM are comparing it to the repositories listed below.
- Youku-mPLUG: A 10 Million Large-scale Chinese Video-Language Pre-training Dataset and Benchmarks ☆296 · Updated last year
- Code for VPGTrans: Transfer Visual Prompt Generator across LLMs. VL-LLaMA, VL-Vicuna. ☆273 · Updated last year
- MMICL, a state-of-the-art VLM from PKU with multimodal in-context learning (ICL) ability ☆352 · Updated last year
- [NeurIPS 2023] Official implementations of "Cheap and Quick: Efficient Vision-Language Instruction Tuning for Large Language Models" ☆520 · Updated last year
- [ICLR 2025 Spotlight] OmniCorpus: A Unified Multimodal Corpus of 10 Billion-Level Images Interleaved with Text ☆374 · Updated 2 months ago
- Baichuan-Omni: Towards Capable Open-source Omni-modal LLM 🌊 ☆267 · Updated 5 months ago
- TaiSu (太素): a large-scale Chinese multimodal dataset (a 100-million-scale Chinese vision-language pre-training dataset) ☆189 · Updated last year
- [ACL 2024] GroundingGPT: Language-Enhanced Multi-modal Grounding Model ☆332 · Updated 8 months ago
- The official repository of "Video assistant towards large language model makes everything easy" ☆227 · Updated 6 months ago
- Research Code for Multimodal-Cognition Team in Ant Group ☆153 · Updated last month
- (CVPR 2024) A benchmark for evaluating Multimodal LLMs using multiple-choice questions ☆343 · Updated 5 months ago
- Harnessing 1.4M GPT4V-synthesized Data for A Lite Vision-Language Model ☆266 · Updated last year
- An open-source, commercially usable multimodal model supporting bilingual (Chinese-English) vision-text dialogue ☆373 · Updated last year
- BuboGPT: Enabling Visual Grounding in Multi-Modal LLMs ☆510 · Updated last year
- ☆783 · Updated last year
- mPLUG-2: A Modularized Multi-modal Foundation Model Across Text, Image and Video (ICML 2023) ☆227 · Updated last year
- VLE: Vision-Language Encoder (a vision-language multimodal pre-trained model) ☆194 · Updated 2 years ago
- Paper: https://arxiv.org/abs/2307.02469 · Page: https://lynx-llm.github.io/ ☆268 · Updated last year
- [TMLR23] Official implementation of UnIVAL: Unified Model for Image, Video, Audio and Language Tasks ☆228 · Updated last year
- [CVPR'24] RLHF-V: Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback ☆283 · Updated 10 months ago
- ☆87 · Updated last year
- mPLUG: Effective and Efficient Vision-Language Learning by Cross-modal Skip-connections (EMNLP 2022) ☆94 · Updated 2 years ago
- Code/Data for the paper "LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding" ☆267 · Updated last year
- Official implementation of the paper "MiniGPT-5: Interleaved Vision-and-Language Generation via Generative Vokens" ☆861 · Updated 2 months ago
- [NeurIPS 2023 Datasets and Benchmarks Track] LAMM: Multi-Modal Large Language Models and Applications as AI Agents ☆316 · Updated last year
- [TLLM'23] PandaGPT: One Model To Instruction-Follow Them All ☆800 · Updated 2 years ago
- Aligning LMMs with Factually Augmented RLHF ☆368 · Updated last year
- The Hugging Face implementation of the Fine-grained Late-interaction Multi-modal Retriever ☆92 · Updated last month
- [MIR-2023-Survey] A continuously updated paper list for multi-modal pre-trained big models ☆287 · Updated 4 months ago
- A multimodal chatbot with integrated computer vision capabilities, our 1st-gen LMM ☆101 · Updated last year