ByungKwanLee / MoAI
[ECCV 2024] Official PyTorch implementation code for realizing the technical part of Mixture of All Intelligence (MoAI) to improve the performance of numerous zero-shot vision-language tasks.
☆320 · Updated last year
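MoAI and the repos listed below target zero-shot vision-language tasks such as visual question answering. MoAI ships its own loading and inference code in this repo; purely as an illustration of the zero-shot VQA workflow these models serve, here is a minimal sketch using the Hugging Face `transformers` LLaVA interface. The checkpoint id, prompt template, and image URL are stand-ins, not MoAI's actual API.

```python
# Minimal zero-shot VQA sketch. Illustrative only: MoAI has its own
# loading/inference code in this repo; the checkpoint, prompt template,
# and image URL below are assumed stand-ins (a LLaVA-1.5 checkpoint
# served through Hugging Face transformers).
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "llava-hf/llava-1.5-7b-hf"  # stand-in open VLM checkpoint
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Any RGB image works; this COCO val image is a common demo example.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# LLaVA-1.5 prompt format: the <image> token marks where visual features go.
prompt = "USER: <image>\nWhat is shown in this image? ASSISTANT:"
inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=64)
print(processor.decode(output_ids[0], skip_special_tokens=True))
```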
Alternatives and similar repositories for MoAI:
Users interested in MoAI are comparing it to the libraries listed below.
- [ACL 2024 Findings] Official PyTorch implementation code for realizing the technical part of CoLLaVO: Crayon Large Language and Vision mO…☆95 · Updated 9 months ago
- [EMNLP 2024] Official PyTorch implementation code for realizing the technical part of Traversal of Layers (TroL) presenting new propagati…☆96 · Updated 9 months ago
- [NeurIPS 2024] Official PyTorch implementation code for realizing the technical part of Mamba-based traversal of rationale (Meteor) to im…☆112 · Updated 10 months ago
- LLaVA-UHD v2: an MLLM Integrating High-Resolution Semantic Pyramid via Hierarchical Window Transformer☆371 · Updated last week
- Rethinking Step-by-step Visual Reasoning in LLMs☆285 · Updated 2 months ago
- A family of highly capable yet efficient large multimodal models☆178 · Updated 7 months ago
- An open-source implementation for fine-tuning Phi3-Vision and Phi3.5-Vision by Microsoft.☆92 · Updated last week
- [ICLR 2025] LLaVA-HR: High-Resolution Large Language-Vision Assistant☆234 · Updated 7 months ago
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills☆735 · Updated last year
- Eagle Family: Exploring Model Designs, Data Recipes and Training Strategies for Frontier-Class Multimodal LLMs☆645 · Updated 2 months ago
- [CVPR 2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts☆317 · Updated 8 months ago
- E5-V: Universal Embeddings with Multimodal Large Language Models☆238 · Updated 3 months ago
- When do we not need larger vision models?☆383 · Updated 2 months ago
- PG-Video-LLaVA: Pixel Grounding in Large Multimodal Video Models☆257 · Updated last year
- Official implementation of project Honeybee (CVPR 2024)☆446 · Updated 10 months ago
- LLM2CLIP makes SOTA pretrained CLIP models even more SOTA.☆499 · Updated 2 weeks ago
- An open-source implementation for fine-tuning the Llama3.2-Vision series by Meta.☆146 · Updated last week
- GPT4RoI: Instruction Tuning Large Language Model on Region-of-Interest☆525 · Updated 9 months ago
- LLaVA-Interactive-Demo☆368 · Updated 8 months ago
- PyTorch Implementation of "V*: Guided Visual Search as a Core Mechanism in Multimodal LLMs"☆577 · Updated last year
- Code/Data for the paper: "LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding"☆266 · Updated 9 months ago
- ☆170 · Updated 5 months ago
- Implementation of PALI3 from the paper "PaLI-3 Vision Language Models: Smaller, Faster, Stronger"☆145 · Updated this week
- [NeurIPS 2024] MoVA: Adapting Mixture of Vision Experts to Multimodal Context☆150 · Updated 6 months ago
- Python library to evaluate VLMs' robustness across diverse benchmarks☆200 · Updated last week
- Official implementation of the Law of Vision Representation in MLLMs☆153 · Updated 4 months ago
- The official repo for the paper "VeCLIP: Improving CLIP Training via Visual-enriched Captions"☆242 · Updated 2 months ago
- LongLLaVA: Scaling Multi-modal LLMs to 1000 Images Efficiently via Hybrid Architecture☆201 · Updated 3 months ago
- Quick exploration into fine-tuning Florence-2☆305 · Updated 6 months ago
- HPT - Open Multimodal LLMs from HyperGAI☆314 · Updated 10 months ago