microsoft / LLM2CLIP
LLM2CLIP makes SOTA pretrained CLIP models even more SOTA.
☆531 · Updated 2 weeks ago
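For orientation before the list: a minimal sketch of what loading an LLM2CLIP vision encoder from the Hugging Face Hub might look like. The checkpoint id, the `trust_remote_code` loading path, and the `get_image_features` call are assumptions based on common Hugging Face conventions, not a confirmed API; check the repo's README for the supported usage.

```python
# Minimal sketch: embed an image with an LLM2CLIP vision tower.
# Checkpoint id and method names below are assumptions; see the
# microsoft/LLM2CLIP README for the exact, supported API.
import torch
from PIL import Image
from transformers import AutoModel, CLIPImageProcessor

model_name = "microsoft/LLM2CLIP-Openai-L-14-336"  # assumed checkpoint id
processor = CLIPImageProcessor.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name, trust_remote_code=True).eval()

image = Image.open("example.jpg")  # any local image
pixels = processor(images=image, return_tensors="pt").pixel_values

with torch.no_grad():
    image_features = model.get_image_features(pixels)  # assumed method name
print(image_features.shape)
```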
Alternatives and similar repositories for LLM2CLIP
Users interested in LLM2CLIP are comparing it to the repositories listed below.
- LLaVA-UHD v2: an MLLM Integrating High-Resolution Semantic Pyramid via Hierarchical Window Transformer ☆382 · Updated 2 months ago
- When do we not need larger vision models? ☆401 · Updated 5 months ago
- VisionLLaMA: A Unified LLaMA Backbone for Vision Tasks ☆385 · Updated last year
- [CVPR 2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts ☆325 · Updated last year
- [ICLR 2025] LLaVA-HR: High-Resolution Large Language-Vision Assistant ☆237 · Updated 11 months ago
- [ECCV 2024] Official code for "Long-CLIP: Unlocking the Long-Text Capability of CLIP" ☆829 · Updated 11 months ago
- [ICML 2025] Official PyTorch implementation of LongVU ☆388 · Updated 2 months ago
- [CVPR 2025 Highlight] The official CLIP training codebase of Inf-CL: "Breaking the Memory Barrier: Near Infinite Batch Size Scaling for C… ☆260 · Updated 6 months ago
- Code for "VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks" [ICLR 2025] ☆313 · Updated last week
- [ICLR 2025 Spotlight] OmniCorpus: A Unified Multimodal Corpus of 10 Billion-Level Images Interleaved with Text ☆374 · Updated 2 months ago
- Long Context Transfer from Language to Vision ☆384 · Updated 3 months ago
- [ICLR 2024 & ECCV 2024] The All-Seeing Projects: Towards Panoptic Visual Recognition & Understanding and General Relation Comprehension of … ☆489 · Updated 11 months ago
- A minimal codebase for finetuning large multimodal models, supporting llava-1.5/1.6, llava-interleave, llava-next-video, llava-onevision,… ☆311 · Updated 4 months ago
- [ACL 2025 🔥] Rethinking Step-by-step Visual Reasoning in LLMs ☆304 · Updated last month
- E5-V: Universal Embeddings with Multimodal Large Language Models ☆257 · Updated 6 months ago
- [CVPR 2024] Official implementation of "ViTamin: Designing Scalable Vision Models in the Vision-language Era" ☆206 · Updated last year
- ✨✨ [CVPR 2025] Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis ☆597 · Updated 2 months ago
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses tha… ☆894 · Updated last month
- [Fully open] [Encoder-free MLLM] Vision as LoRA ☆316 · Updated last month
- NeurIPS 2024 paper: A Unified Pixel-level Vision LLM for Understanding, Generating, Segmenting, Editing ☆554 · Updated 8 months ago
- Code for ChatRex: Taming Multimodal LLM for Joint Perception and Understanding ☆194 · Updated 5 months ago
- Tarsier: a family of large-scale video-language models designed to generate high-quality video descriptions, together with g… ☆427 · Updated 2 months ago
- A Framework of Small-scale Large Multimodal Models ☆859 · Updated 2 months ago
- [ICLR 2025] Diffusion Feedback Helps CLIP See Better ☆283 · Updated 5 months ago
- [ICML'24 Oral] "MagicLens: Self-Supervised Image Retrieval with Open-Ended Instructions" ☆183 · Updated 8 months ago
- EVE Series: Encoder-Free Vision-Language Models from BAAI ☆334 · Updated last week
- Official repository of the paper "VideoGPT+: Integrating Image and Video Encoders for Enhanced Video Understanding" ☆279 · Updated 3 months ago
- PyTorch implementation of "V*: Guided Visual Search as a Core Mechanism in Multimodal LLMs" ☆644 · Updated last year
- Official repository for the paper PLLaVA ☆660 · Updated 11 months ago