MonolithFoundation / Bumblebee
A simple MLLM that surpasses QwenVL-Max using open-source data only, built on a 14B LLM.
☆37 · Updated 6 months ago
Alternatives and similar repositories for Bumblebee:
Users interested in Bumblebee are comparing it to the libraries listed below:
- Our 2nd-gen LMM ☆33 · Updated 9 months ago
- ☆29 · Updated 6 months ago
- ☆56 · Updated last year
- Chinese CLIP models with SOTA performance. ☆52 · Updated last year
- Exploring Efficient Fine-Grained Perception of Multimodal Large Language Models ☆59 · Updated 4 months ago
- SkyScript-100M: 1,000,000,000 Pairs of Scripts and Shooting Scripts for Short Drama: https://arxiv.org/abs/2408.09333v2 ☆118 · Updated 3 months ago
- ☆25 · Updated 5 months ago
- A light proxy solution for HuggingFace hub. ☆46 · Updated last year
- Empirical Study Towards Building An Effective Multi-Modal Large Language Model ☆23 · Updated last year
- ☆27 · Updated 10 months ago
- ☆67 · Updated last year
- Vision Search Assistant: Empower Vision-Language Models as Multimodal Search Engines ☆117 · Updated 4 months ago
- SUS-Chat: Instruction tuning done right ☆48 · Updated last year
- ☆17 · Updated last year
- Vary-tiny codebase built upon LAVIS (for training from scratch) and a PDF image-text pair dataset (about 600k pairs, English/Chinese) ☆79 · Updated 5 months ago
- ☆78 · Updated 10 months ago
- Valley is a cutting-edge multimodal large model designed to handle a variety of tasks involving text, images, and video data. ☆219 · Updated 2 weeks ago
- Synthetic data generation pipelines for text-rich images. ☆45 · Updated last week
- The newest version of Llama 3, with source code explained line by line in Chinese ☆22 · Updated 10 months ago
- An open-source multimodal large language model based on baichuan-7b ☆73 · Updated last year
- Qwen-WisdomVast is a large model trained on 1 million high-quality Chinese multi-turn SFT data, 200,000 English multi-turn SFT data, and … ☆18 · Updated 11 months ago
- Search, organize, discover anything! ☆48 · Updated 10 months ago
- A demo for a PDF parser (including OCR and object detection tools) ☆34 · Updated 5 months ago
- A Framework for Decoupling and Assessing the Capabilities of VLMs ☆40 · Updated 8 months ago
- MTVQA: Benchmarking Multilingual Text-Centric Visual Question Answering. A comprehensive evaluation of multimodal large model multilingua… ☆53 · Updated 3 months ago
- Official code for "Fox: Focus Anywhere for Fine-grained Multi-page Document Understanding" ☆139 · Updated 9 months ago
- LLaVA combined with the Magvit image tokenizer, training an MLLM without a vision encoder; unifying image understanding and generation. ☆35 · Updated 8 months ago
- This project uses the LLaVA 1.6 multimodal model to implement text-to-image and image-to-image search. ☆19 · Updated last year