ByungKwanLee / MoAI
[ECCV 2024] Official PyTorch implementation code for realizing the technical part of Mixture of All Intelligence (MoAI) to improve performance on numerous zero-shot vision-language tasks.
☆315 · Updated 10 months ago
Alternatives and similar repositories for MoAI:
Users who are interested in MoAI are comparing it to the repositories listed below.
- [EMNLP 2024] Official PyTorch implementation code for realizing the technical part of Traversal of Layers (TroL) presenting new propagati… ☆89 · Updated 7 months ago
- [ACL 2024 Findings] Official PyTorch implementation code for realizing the technical part of CoLLaVO: Crayon Large Language and Vision mO… ☆94 · Updated 7 months ago
- LLaVA-HR: High-Resolution Large Language-Vision Assistant ☆223 · Updated 5 months ago
- [NeurIPS 2024] Official PyTorch implementation code for realizing the technical part of Mamba-based traversal of rationale (Meteor) to im… ☆107 · Updated 8 months ago
- E5-V: Universal Embeddings with Multimodal Large Language Models ☆224 · Updated last month
- LLaVA-UHD v2: an MLLM Integrating High-Resolution Feature Pyramid via Hierarchical Window Transformer ☆354 · Updated 2 weeks ago
- A family of highly capable yet efficient large multimodal models ☆176 · Updated 5 months ago
- [CVPR 2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts ☆308 · Updated 6 months ago
- LLM2CLIP: making a SOTA pretrained CLIP model even more SOTA ☆452 · Updated last week
- Rethinking Step-by-step Visual Reasoning in LLMs ☆220 · Updated this week
- Eagle Family: Exploring Model Designs, Data Recipes and Training Strategies for Frontier-Class Multimodal LLMs ☆572 · Updated this week
- ☆156 · Updated 3 months ago
- The official repo for the paper "VeCLIP: Improving CLIP Training via Visual-enriched Captions" ☆238 · Updated last week
- When do we not need larger vision models? ☆357 · Updated last month
- CuMo: Scaling Multimodal LLM with Co-Upcycled Mixture-of-Experts ☆138 · Updated 7 months ago
- The official repository of our paper "What If We Recaption Billions of Web Images with LLaMA-3?" ☆128 · Updated 7 months ago
- Official implementation of project Honeybee (CVPR 2024) ☆441 · Updated 8 months ago
- An open-source implementation for fine-tuning Phi3-Vision and Phi3.5-Vision by Microsoft ☆81 · Updated this week
- Python library to evaluate the robustness of VLMs across diverse benchmarks ☆184 · Updated last month
- Code for ChatRex: Taming Multimodal LLM for Joint Perception and Understanding ☆126 · Updated this week
- [NeurIPS'24 Spotlight] EVE: Encoder-Free Vision-Language Models ☆268 · Updated 3 months ago
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills ☆721 · Updated 11 months ago
- LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture ☆188 · Updated 3 weeks ago
- ✨✨ Beyond LLaVA-HD: Diving into High-Resolution Large Multimodal Models ☆147 · Updated last month
- Code for the Molmo Vision-Language Model ☆258 · Updated last month
- PG-Video-LLaVA: Pixel Grounding in Large Multimodal Video Models ☆249 · Updated last year
- The official CLIP training codebase of Inf-CL: "Breaking the Memory Barrier: Near Infinite Batch Size Scaling for Contrastive Loss". A su… ☆220 · Updated 2 weeks ago
- Official repository of the paper "VideoGPT+: Integrating Image and Video Encoders for Enhanced Video Understanding" ☆251 · Updated 5 months ago
- LLaVA-Interactive-Demo ☆361 · Updated 6 months ago