microsoft / LLM2CLIP
LLM2CLIP makes SOTA pretrained CLIP models even more SOTA.
☆527 · Updated 3 months ago
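As context for the CLIP-style repositories listed below, here is a minimal sketch of the image-text matching workflow they build on. It uses the standard Hugging Face CLIP API with the public `openai/clip-vit-base-patch32` checkpoint, not LLM2CLIP's own loaders or weights, which may differ; the image path is a placeholder.

```python
# Minimal CLIP image-text similarity sketch (generic Hugging Face API,
# not LLM2CLIP's own interface; checkpoint and image path are assumptions).
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("cat.jpg")  # placeholder image path
texts = ["a photo of a cat", "a photo of a dog"]

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds image-text similarity scores; softmax turns them
# into matching probabilities over the candidate captions.
probs = outputs.logits_per_image.softmax(dim=-1)
print(probs)
```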
Alternatives and similar repositories for LLM2CLIP
Users interested in LLM2CLIP are comparing it to the libraries listed below.
- LLaVA-UHD v2: an MLLM Integrating High-Resolution Semantic Pyramid via Hierarchical Window Transformer ☆380 · Updated 2 months ago
- [ECCV 2024] official code for "Long-CLIP: Unlocking the Long-Text Capability of CLIP" ☆817 · Updated 10 months ago
- [CVPR 2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts ☆324 · Updated 11 months ago
- [ICLR 2025] LLaVA-HR: High-Resolution Large Language-Vision Assistant ☆238 · Updated 10 months ago
- When do we not need larger vision models? ☆395 · Updated 4 months ago
- ☆363 · Updated 4 months ago
- [ICLR 2025 Spotlight] OmniCorpus: A Unified Multimodal Corpus of 10 Billion-Level Images Interleaved with Text ☆368 · Updated last month
- A minimal codebase for finetuning large multimodal models, supporting llava-1.5/1.6, llava-interleave, llava-next-video, llava-onevision,… ☆301 · Updated 4 months ago
- ✨✨[CVPR 2025] Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis ☆569 · Updated last month
- [ACL 2025 🔥] Rethinking Step-by-step Visual Reasoning in LLMs ☆302 · Updated last month
- [CVPR 2025 Highlight] The official CLIP training codebase of Inf-CL: "Breaking the Memory Barrier: Near Infinite Batch Size Scaling for C… ☆257 · Updated 5 months ago
- E5-V: Universal Embeddings with Multimodal Large Language Models ☆256 · Updated 6 months ago
- Long Context Transfer from Language to Vision ☆382 · Updated 3 months ago
- [ICML 2025] Official PyTorch implementation of LongVU ☆383 · Updated last month
- This repo contains the code for "VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks" [ICLR 2025] ☆267 · Updated last week
- VisionLLaMA: A Unified LLaMA Backbone for Vision Tasks ☆385 · Updated 11 months ago
- [CVPR 2024] Official implementation of "ViTamin: Designing Scalable Vision Models in the Vision-language Era" ☆206 · Updated last year
- [ICLR 2024 & ECCV 2024] The All-Seeing Projects: Towards Panoptic Visual Recognition & Understanding and General Relation Comprehension of … ☆487 · Updated 10 months ago
- [CVPR 2024] TimeChat: A Time-sensitive Multimodal Large Language Model for Long Video Understanding ☆378 · Updated last month
- ☆338 · Updated last year
- Anole: An Open, Autoregressive and Native Multimodal Model for Interleaved Image-Text Generation ☆769 · Updated last week
- [ECCV 2024 Oral] Code for paper: An Image is Worth 1/2 Tokens After Layer 2: Plug-and-Play Inference Acceleration for Large Vision-Langua… ☆439 · Updated 5 months ago
- EVE Series: Encoder-Free Vision-Language Models from BAAI ☆332 · Updated 3 months ago
- LaVIT: Empower the Large Language Model to Understand and Generate Visual Content ☆584 · Updated 8 months ago
- Tarsier -- a family of large-scale video-language models designed to generate high-quality video descriptions, together with g… ☆409 · Updated 2 months ago
- OpenVision: A Fully-Open, Cost-Effective Family of Advanced Vision Encoders for Multimodal Learning ☆264 · Updated last month
- Official repository for the paper PLLaVA ☆657 · Updated 10 months ago
- Explore the Multimodal “Aha Moment” on 2B Model ☆594 · Updated 3 months ago
- This is the first paper to explore how to effectively use RL for MLLMs and introduce Vision-R1, a reasoning MLLM that leverages cold-sta… ☆613 · Updated last week
- [ECCV 2024] ShareGPT4V: Improving Large Multi-modal Models with Better Captions ☆223 · Updated 11 months ago