OpenGVLab / VisionLLM
VisionLLM Series
⭐903 · Updated 3 weeks ago
Related projects
Alternatives and complementary repositories for VisionLLM
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… ⭐777 · Updated 5 months ago
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills ⭐703 · Updated 9 months ago
- GPT4RoI: Instruction Tuning Large Language Model on Region-of-Interest ⭐506 · Updated 4 months ago
- [ICLR 2024 & ECCV 2024] The All-Seeing Projects: Towards Panoptic Visual Recognition & Understanding and General Relation Comprehension of … ⭐457 · Updated 3 months ago
- ⭐743 · Updated 4 months ago
- [CVPR 2024] OneLLM: One Framework to Align All Modalities with Language ⭐583 · Updated 2 weeks ago
- API for Grounding DINO 1.5: IDEA Research's Most Capable Open-World Object Detection Model Series ⭐770 · Updated 3 months ago
- A family of lightweight multimodal models. ⭐928 · Updated 2 weeks ago
- [CVPR 2024] Aligning and Prompting Everything All at Once for Universal Visual Perception ⭐486 · Updated 6 months ago
- Recent LLM-based CV and related works. Welcome to comment/contribute! ⭐837 · Updated 5 months ago
- 【ICLR 2024 🔥】 Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment ⭐717 · Updated 7 months ago
- Chatbot Arena meets multi-modality! Multi-Modality Arena allows you to benchmark vision-language models side-by-side while providing imag… ⭐464 · Updated 6 months ago
- ⭐458 · Updated this week
- Open-source evaluation toolkit for large vision-language models (LVLMs), supporting 160+ VLMs and 50+ benchmarks ⭐1,305 · Updated this week
- [CVPR 2024] Alpha-CLIP: A CLIP Model Focusing on Wherever You Want ⭐691 · Updated 3 months ago
- [ECCV 2024] Tokenize Anything via Prompting ⭐521 · Updated 4 months ago
- A Framework of Small-scale Large Multimodal Models ⭐635 · Updated 3 weeks ago
- [ICCV 2023] Official implementation of the paper "A Simple Framework for Open-Vocabulary Segmentation and Detection" ⭐653 · Updated 9 months ago
- [ECCV 2024] Official code for "Long-CLIP: Unlocking the Long-Text Capability of CLIP" ⭐666 · Updated 2 months ago
- [CVPR 2024] MovieChat: From Dense Token to Sparse Memory for Long Video Understanding ⭐524 · Updated last week
- Official implementation of SEED-LLaMA (ICLR 2024). ⭐574 · Updated last month
- LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models (ECCV 2024) ⭐728 · Updated 3 months ago
- PyTorch implementation of "V*: Guided Visual Search as a Core Mechanism in Multimodal LLMs" ⭐523 · Updated 10 months ago
- Official repository for the paper PLLaVA ⭐581 · Updated 3 months ago
- ICLR 2024 Spotlight: curation/training code, metadata, distribution, and pre-trained models for MetaCLIP; CVPR 2024: MoDE: CLIP Data Expert… ⭐1,245 · Updated 2 weeks ago
- A collection of papers on the topic of "Computer Vision in the Wild (CVinW)" ⭐1,186 · Updated 7 months ago
- VisionLLaMA: A Unified LLaMA Backbone for Vision Tasks ⭐365 · Updated 4 months ago
- LLaVA-UHD: an LMM Perceiving Any Aspect Ratio and High-Resolution Images ⭐318 · Updated last month
- [ECCV 2024] Video Foundation Models & Data for Multimodal Understanding ⭐1,402 · Updated last month
- [CVPR 2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts ⭐294 · Updated 3 months ago