magic-research / bubogpt
BuboGPT: Enabling Visual Grounding in Multi-Modal LLMs
☆508 · Updated last year
Alternatives and similar repositories for bubogpt:
Users interested in bubogpt are comparing it to the libraries listed below.
- Official implementation of paper "MiniGPT-5: Interleaved Vision-and-Language Generation via Generative Vokens" ☆864 · Updated 3 months ago
- Codes for VPGTrans: Transfer Visual Prompt Generator across LLMs. VL-LLaMA, VL-Vicuna. ☆271 · Updated last year
- [TLLM'23] PandaGPT: One Model To Instruction-Follow Them All ☆782 · Updated last year
- [CVPR 2025] Video Narration as Vocabulary & Video as Long Document ☆562 · Updated last week
- ☆770 · Updated 7 months ago
- LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models (ECCV 2024) ☆783 · Updated 7 months ago
- [ACL 2024] GroundingGPT: Language-Enhanced Multi-modal Grounding Model ☆321 · Updated 4 months ago
- Youku-mPLUG: A 10 Million Large-scale Chinese Video-Language Pre-training Dataset and Benchmarks ☆294 · Updated last year
- Official code for the Goldfish model for long video understanding and MiniGPT4-video for short video understanding ☆600 · Updated 3 months ago
- X-LLM: Bootstrapping Advanced Large Language Models by Treating Multi-Modalities as Foreign Languages ☆310 · Updated last year
- ☆772 · Updated 8 months ago
- MMICL, a state-of-the-art VLM with in-context learning ability from ICL, PKU ☆346 · Updated last year
- An open-source, commercially usable multi-modal model supporting bilingual (Chinese-English) visual-text dialogue. ☆368 · Updated last year
- Official implementation of SEED-LLaMA (ICLR 2024). ☆604 · Updated 6 months ago
- [ICLR 2024 Spotlight] DreamLLM: Synergistic Multimodal Comprehension and Creation ☆427 · Updated 3 months ago
- GPT4Tools is an intelligent system that can automatically decide, control, and utilize different visual foundation models, allowing the u… ☆768 · Updated last year
- An open source implementation of "Scaling Autoregressive Multi-Modal Models: Pretraining and Instruction Tuning", an all-new multi modal … ☆359 · Updated last year
- [CVPR 2024] MovieChat: From Dense Token to Sparse Memory for Long Video Understanding ☆600 · Updated last month
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills ☆733 · Updated last year
- The official repository of "Video assistant towards large language model makes everything easy" ☆220 · Updated 3 months ago
- LaVIT: Empower the Large Language Model to Understand and Generate Visual Content ☆569 · Updated 5 months ago
- Multimodal Models in Real World ☆452 · Updated last month
- [A toolbox for fun.] Transform Image into Unique Paragraph with ChatGPT, BLIP2, OFA, GRIT, Segment Anything, ControlNet. ☆804 · Updated last year
- Emu Series: Generative Multimodal Models from BAAI ☆1,695 · Updated 5 months ago
- [ICLR 2024 & ECCV 2024] The All-Seeing Projects: Towards Panoptic Visual Recognition & Understanding and General Relation Comprehension of … ☆477 · Updated 7 months ago
- Chatbot Arena meets multi-modality! Multi-Modality Arena allows you to benchmark vision-language models side-by-side while providing imag… ☆506 · Updated 11 months ago
- 🔥🔥 LLaVA++: Extending LLaVA with Phi-3 and LLaMA-3 (LLaVA LLaMA-3, LLaVA Phi-3) ☆835 · Updated 8 months ago
- ☆903 · Updated last year
- ✨✨ Woodpecker: Hallucination Correction for Multimodal Large Language Models ☆632 · Updated 3 months ago
- GPT4RoI: Instruction Tuning Large Language Model on Region-of-Interest ☆524 · Updated 9 months ago