PyTorch code for the paper "From CLIP to DINO: Visual Encoders Shout in Multi-modal Large Language Models"
★209 · Updated Jan 8, 2025
Alternatives and similar repositories for COMM
Users interested in COMM are comparing it to the repositories listed below.
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… (★949 · Updated Aug 5, 2025)
- Official implementation of the ICCV 2023 paper "SegPrompt: Boosting Open-World Segmentation via Category-level Prompt Learning" (★111 · Updated May 28, 2025)
- [TMM 2023] Self-paced Curriculum Adapting of CLIP for Visual Grounding (★132 · Updated Nov 10, 2025)
- Code for the paper "Towards Open-Ended Visual Recognition with Large Language Model" (★100 · Updated Jul 15, 2024)
- [CVPR 2024] CapsFusion: Rethinking Image-Text Data at Scale (★214 · Updated Feb 27, 2024)
- Official implementation of the paper "Prompt Pre-Training with Over Twenty-Thousand Classes for Open-Vocabulary Visual Recognition" (★259 · Updated May 3, 2024)
- Official implementation of the paper "CLIP-DINOiser: Teaching CLIP a few DINO tricks" (★277 · Updated Oct 26, 2024)
- VisionLLM Series (★1,139 · Updated Feb 27, 2025)
- Official implementation of TagAlign (★37 · Updated Dec 11, 2024)
- [ICLR 2024 & ECCV 2024] The All-Seeing Projects: Towards Panoptic Visual Recognition & Understanding and General Relation Comprehension of … (★506 · Updated Aug 9, 2024)
- Recognize Any Regions (★123 · Updated Dec 18, 2024)
- Code/data for the paper "LLaVAR: Enhanced Visual Instruction Tuning for Text-Rich Image Understanding" (★269 · Updated Jun 12, 2024)
- DenseFusion-1M: Merging Vision Experts for Comprehensive Multimodal Perception (★159 · Updated Dec 6, 2024)
- [ICCV 2023] Official implementation of the paper "A Simple Framework for Open-Vocabulary Segmentation and Detection" (★749 · Updated Jan 22, 2024)
- Emu Series: Generative Multimodal Models from BAAI (★1,772 · Updated Jan 12, 2026)
- [CVPR 2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts (★336 · Updated Jul 17, 2024)
- NeurIPS 2025 Spotlight; ICLR 2024 Spotlight; CVPR 2024; EMNLP 2024 (★1,826 · Updated Nov 27, 2025)
- EVA Series: Visual Representation Fantasies from BAAI (★2,655 · Updated Aug 1, 2024)
- [CVPR 2024] Official implementation of the paper "Visual In-context Learning" (★531 · Updated Apr 8, 2024)
- A collection of visual instruction tuning datasets (★77 · Updated Mar 14, 2024)
- [NeurIPS 2023] Code release for "Hierarchical Open-vocabulary Universal Image Segmentation" (★295 · Updated Jun 19, 2025)
- [EMNLP 2023] TESTA: Temporal-Spatial Token Aggregation for Long-form Video-Language Understanding (★49 · Updated Jan 9, 2024)
- Project page for "LISA: Reasoning Segmentation via Large Language Model" (★2,606 · Updated Feb 16, 2025)
- Official implementation of the paper "MiniGPT-5: Interleaved Vision-and-Language Generation via Generative Vokens" (★862 · Updated May 8, 2025)
- [ECCV 2024] Tokenize Anything via Prompting (★602 · Updated Dec 11, 2024)
- [CVPR 2023] A DETR-style framework for open-vocabulary detection (OVD) (★199 · Updated Apr 16, 2023)
- [NeurIPS 2024] Official implementation of the paper "Interfacing Foundation Models' Embeddings" (★131 · Updated Aug 21, 2024)
- Pink: Unveiling the Power of Referential Comprehension for Multi-modal LLMs (★98 · Updated Jan 16, 2025)
- [ECCVW 2025] GPT4RoI: Instruction Tuning Large Language Model on Region-of-Interest (★551 · Updated Jun 3, 2025)
- [ECCV 2024] Official code for "Long-CLIP: Unlocking the Long-Text Capability of CLIP" (★892 · Updated Aug 13, 2024)
- Grounded Language-Image Pre-training (★2,585 · Updated Jan 24, 2024)