SHI-Labs / CuMo
CuMo: Scaling Multimodal LLM with Co-Upcycled Mixture-of-Experts
☆134 · Updated 5 months ago
Related projects
Alternatives and complementary repositories for CuMo
- Public code repo for the paper "A Single Transformer for Scalable Vision-Language Modeling" ☆113 · Updated last month
- [CVPR 2024] CapsFusion: Rethinking Image-Text Data at Scale ☆194 · Updated 8 months ago
- ☆131 · Updated 10 months ago
- Official code for the paper "Mantis: Multi-Image Instruction Tuning" ☆179 · Updated last week
- [NeurIPS 2024] Dense Connector for MLLMs ☆133 · Updated 3 weeks ago
- [NeurIPS'24 Spotlight] EVE: Encoder-Free Vision-Language Models ☆227 · Updated last month
- LLaVA-HR: High-Resolution Large Language-Vision Assistant ☆213 · Updated 2 months ago
- Official repo for StableLLAVA ☆90 · Updated 10 months ago
- This repo contains evaluation code for the paper "BLINK: Multimodal Large Language Models Can See but Not Perceive". https://arxiv.or… ☆107 · Updated 4 months ago
- Densely Captioned Images (DCI) dataset repository ☆158 · Updated 4 months ago
- [NeurIPS 2024] MoVA: Adapting Mixture of Vision Experts to Multimodal Context ☆130 · Updated last month
- [NeurIPS 2024] This repo contains evaluation code for the paper "Are We on the Right Way for Evaluating Large Vision-Language Models" ☆145 · Updated last month
- Implementation of PALI3 from the paper "PaLI-3 Vision Language Models: Smaller, Faster, Stronger" ☆144 · Updated this week
- This is the official repository of our paper "What If We Recaption Billions of Web Images with LLaMA-3?" ☆120 · Updated 4 months ago
- Matryoshka Multimodal Models ☆81 · Updated last month
- [ECCV 2024] Official PyTorch implementation of DreamLIP: Language-Image Pre-training with Long Captions ☆105 · Updated last week
- This repo contains the code and data for "VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks" ☆59 · Updated this week
- ☆56 · Updated 9 months ago
- Harnessing 1.4M GPT4V-synthesized Data for a Lite Vision-Language Model ☆244 · Updated 4 months ago
- Official implementation of the Law of Vision Representation in MLLMs ☆128 · Updated 2 months ago
- ✨✨Beyond LLaVA-HD: Diving into High-Resolution Large Multimodal Models ☆137 · Updated this week
- SVIT: Scaling up Visual Instruction Tuning ☆163 · Updated 4 months ago
- LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture ☆175 · Updated 3 weeks ago
- E5-V: Universal Embeddings with Multimodal Large Language Models ☆167 · Updated 3 months ago
- ☆103 · Updated 3 months ago
- DenseFusion-1M: Merging Vision Experts for Comprehensive Multimodal Perception ☆115 · Updated last month
- ☆119 · Updated last month
- ☆121 · Updated last week
- Official repository of the MMDU dataset ☆74 · Updated last month
- LLaVA-PruMerge: Adaptive Token Reduction for Efficient Large Multimodal Models ☆98 · Updated 5 months ago