OpenGVLab / VisionLLM
VisionLLM Series
☆1,002 · Updated 2 weeks ago
Alternatives and similar repositories for VisionLLM:
Users interested in VisionLLM are comparing it to the libraries listed below.
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… ☆828 · Updated 2 months ago
- GPT4RoI: Instruction Tuning Large Language Model on Region-of-Interest ☆522 · Updated 8 months ago
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills ☆722 · Updated last year
- ☆765 · Updated 7 months ago
- [ICLR 2024 & ECCV 2024] The All-Seeing Projects: Towards Panoptic Visual Recognition & Understanding and General Relation Comprehension of … ☆476 · Updated 6 months ago
- [CVPR 2024] Alpha-CLIP: A CLIP Model Focusing on Wherever You Want ☆777 · Updated 6 months ago
- [ECCV 2024] Tokenize Anything via Prompting ☆559 · Updated 2 months ago
- Grounding DINO 1.5: IDEA Research's Most Capable Open-World Object Detection Model Series ☆897 · Updated last month
- Chatbot Arena meets multi-modality! Multi-Modality Arena allows you to benchmark vision-language models side-by-side while providing imag… ☆494 · Updated 10 months ago
- ☆498 · Updated 3 months ago
- [CVPR 2024] OneLLM: One Framework to Align All Modalities with Language ☆617 · Updated 4 months ago
- LLaVA-Interactive-Demo ☆362 · Updated 6 months ago
- When do we not need larger vision models? ☆368 · Updated last week
- ☆599 · Updated last year
- [CVPR 2024] MovieChat: From Dense Token to Sparse Memory for Long Video Understanding ☆588 · Updated 3 weeks ago
- Recent LLM-based CV and related works. Welcome to comment/contribute! ☆853 · Updated 8 months ago
- A family of lightweight multimodal models. ☆987 · Updated 3 months ago
- PyTorch Implementation of "V*: Guided Visual Search as a Core Mechanism in Multimodal LLMs" ☆568 · Updated last year
- [CVPR 2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts ☆313 · Updated 7 months ago
- ICLR 2024 Spotlight: curation/training code, metadata, distribution and pre-trained models for MetaCLIP; CVPR 2024: MoDE: CLIP Data Expert… ☆1,352 · Updated 2 months ago
- VisionLLaMA: A Unified LLaMA Backbone for Vision Tasks ☆381 · Updated 7 months ago
- [CVPR 2024] Aligning and Prompting Everything All at Once for Universal Visual Perception ☆516 · Updated 9 months ago
- LLM2CLIP makes SOTA pretrained CLIP models even more SOTA. ☆470 · Updated last month
- 【ICLR 2024 🔥】 Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment ☆784 · Updated 10 months ago
- ✨✨ Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis ☆461 · Updated 2 months ago
- A Framework of Small-scale Large Multimodal Models ☆745 · Updated 3 weeks ago
- [ECCV 2024] Video Foundation Models & Data for Multimodal Understanding ☆1,681 · Updated last week
- NeurIPS 2024 Paper: A Unified Pixel-level Vision LLM for Understanding, Generating, Segmenting, Editing ☆484 · Updated 4 months ago
- (CVPR 2024) A benchmark for evaluating Multimodal LLMs using multiple-choice questions. ☆328 · Updated last month
- Official implementation of SEED-LLaMA (ICLR 2024). ☆596 · Updated 5 months ago