OpenGVLab / VisionLLM
VisionLLM Series
☆1,119 · Updated 8 months ago
Alternatives and similar repositories for VisionLLM
Users interested in VisionLLM are comparing it to the libraries listed below.
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha…☆922Updated 2 months ago
- ☆797Updated last year
- GPT4RoI: Instruction Tuning Large Language Model on Region-of-Interest☆548Updated 4 months ago
- LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills☆758Updated last year
- 【ICLR 2024🔥】 Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment☆838Updated last year
- [ICLR 2024 & ECCV 2024] The All-Seeing Projects: Towards Panoptic Visual Recognition&Understanding and General Relation Comprehension of …☆499Updated last year
- A family of lightweight multimodal models.☆1,045Updated 11 months ago
- A Framework of Small-scale Large Multimodal Models☆911Updated 6 months ago
- [CVPR 2024] OneLLM: One Framework to Align All Modalities with Language☆656Updated last year
- Recent LLM-based CV and related works. Welcome to comment/contribute!☆873Updated 7 months ago
- NeurIPS 2024 Paper: A Unified Pixel-level Vision LLM for Understanding, Generating, Segmenting, Editing☆572Updated last year
- LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models (ECCV 2024)☆843Updated last year
- [CVPR 2024] MovieChat: From Dense Token to Sparse Memory for Long Video Understanding☆659Updated 9 months ago
- [ECCV 2024] Tokenize Anything via Prompting☆595Updated 10 months ago
- [CVPR 2024] Alpha-CLIP: A CLIP Model Focusing on Wherever You Want☆845Updated 3 months ago
- PyTorch Implementation of "V* : Guided Visual Search as a Core Mechanism in Multimodal LLMs"☆681Updated last year
- Grounding DINO 1.5: IDEA Research's Most Capable Open-World Object Detection Model Series☆1,044Updated 9 months ago
- Emu Series: Generative Multimodal Models from BAAI☆1,746Updated last year
- LLM2CLIP makes SOTA pretrained CLIP models even stronger. ☆554 · Updated 3 months ago
- LLaVA-UHD v2: an MLLM Integrating High-Resolution Semantic Pyramid via Hierarchical Window Transformer☆388Updated 6 months ago
- Chatbot Arena meets multi-modality! Multi-Modality Arena allows you to benchmark vision-language models side-by-side while providing imag…☆543Updated last year
- A collection of papers on the topic of ``Computer Vision in the Wild (CVinW)''☆1,339Updated last year
- Project Page for "LISA: Reasoning Segmentation via Large Language Model"☆2,458Updated 8 months ago
- Strong and Open Vision Language Assistant for Mobile Devices☆1,285Updated last year
- VisionLLaMA: A Unified LLaMA Backbone for Vision Tasks☆389Updated last year
- [ECCV2024] Video Foundation Models & Data for Multimodal Understanding☆2,083Updated 2 months ago
- ☆628Updated last year
- ✨✨Woodpecker: Hallucination Correction for Multimodal Large Language Models☆638Updated 10 months ago
- ✨✨[CVPR 2025] Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis☆670Updated 2 months ago
- ICLR2024 Spotlight: curation/training code, metadata, distribution and pre-trained models for MetaCLIP; CVPR 2024: MoDE: CLIP Data Expert…☆1,694Updated last month