TinyLLaVA / TinyLLaVA_Factory
A Framework of Small-scale Large Multimodal Models
Related projects:
- Open-source evaluation toolkit of large vision-language models (LVLMs), supporting ~100 VLMs and 40+ benchmarks
- A family of lightweight multimodal models.
- 📖 A curated list of resources dedicated to hallucination in multimodal large language models (MLLMs).
- [NeurIPS 2023] Official implementation of "Cheap and Quick: Efficient Vision-Language Instruction Tuning for Large Language Models"
- Efficient Multimodal Large Language Models: A Survey
- LLaVA-UHD: an LMM Perceiving Any Aspect Ratio and High-Resolution Images
- [ICLR 2024 🔥] Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment
- Chatbot Arena meets multi-modality! Multi-Modality Arena allows you to benchmark vision-language models side-by-side while providing images…
- [CVPR 2024] OneLLM: One Framework to Align All Modalities with Language
- Repository for Show-o, One Single Transformer to Unify Multimodal Understanding and Generation.
- [CVPR 2024] A benchmark for evaluating Multimodal LLMs using multiple-choice questions.
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills
- [NeurIPS 2023 Datasets and Benchmarks Track] LAMM: Multi-Modal Large Language Models and Applications as AI Agents
- Recent LLM-based CV and related works. Welcome to comment/contribute!
- ✨✨Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses that…
- [ICLR 2024 & ECCV 2024] The All-Seeing Projects: Towards Panoptic Visual Recognition & Understanding and General Relation Comprehension of the Open World
- [ECCV 2024] Official code for "Long-CLIP: Unlocking the Long-Text Capability of CLIP"
- A flexible and efficient codebase for training visually-conditioned language models (VLMs)
- LaVIT: Empower the Large Language Model to Understand and Generate Visual Content
- When do we not need larger vision models?
- Awesome_Multimodel is a curated GitHub repository that provides a comprehensive collection of resources for Multimodal Large Language Models…
- Cobra: Extending Mamba to Multi-modal Large Language Model for Efficient Inference
- [CVPR 2024 Highlight] OPERA: Alleviating Hallucination in Multi-Modal Large Language Models via Over-Trust Penalty and Retrospection-Allocation
- VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs
- [ICML 2024 (Oral)] Official PyTorch implementation of DoRA: Weight-Decomposed Low-Rank Adaptation (see the sketch after this list)
- Aligning LMMs with Factually Augmented RLHF
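
The DoRA entry above names a concrete technique: the frozen pretrained weight is decomposed into a magnitude vector and a direction, and fine-tuning trains the magnitude alongside a LoRA-style low-rank update of the direction. Below is a minimal PyTorch sketch of that decomposition; it is an illustration rather than the official implementation, and the `DoRALinear` class name, the default rank, and the initialization choices are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DoRALinear(nn.Module):
    """Illustrative DoRA-style linear layer (hypothetical, not the official code).

    The frozen pretrained weight W0 is split into a trainable magnitude m and a
    direction; a low-rank update B @ A perturbs the direction, which is
    re-normalized row-wise before the magnitude is applied: W' = m * V / ||V||.
    """

    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        self.weight = base.weight                  # pretrained W0, kept frozen
        self.weight.requires_grad_(False)
        self.bias = base.bias
        if self.bias is not None:
            self.bias.requires_grad_(False)        # train only m, A, and B
        out_features, in_features = self.weight.shape
        # LoRA-style factors: delta_W = B @ A; B starts at zero, so the layer
        # initially reproduces the pretrained behavior exactly.
        self.A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, rank))
        # Trainable magnitude, initialized to the per-row norms of W0.
        self.m = nn.Parameter(self.weight.norm(dim=1, keepdim=True).detach().clone())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        v = self.weight + self.B @ self.A                  # V = W0 + B A
        direction = v / v.norm(dim=1, keepdim=True)        # unit-norm rows
        return F.linear(x, self.m * direction, self.bias)  # W' = m * V / ||V||

# Usage sketch: wrap an existing linear layer; only m, A, and B receive gradients.
layer = DoRALinear(nn.Linear(512, 256), rank=8)
out = layer(torch.randn(4, 512))
```

Initializing B to zero and m to the norms of W0 makes the wrapped layer numerically identical to the pretrained one at the start of training, the usual convention for low-rank adapters.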